I0321 21:07:09.811141 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0321 21:07:09.811470 6 e2e.go:109] Starting e2e run "45f670d0-e3a9-46e2-ab39-c3b5a85ec799" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584824828 - Will randomize all specs
Will run 278 of 4843 specs

Mar 21 21:07:09.869: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 21:07:09.873: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 21 21:07:09.904: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 21 21:07:09.930: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 21 21:07:09.930: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 21 21:07:09.930: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 21 21:07:09.938: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 21 21:07:09.938: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 21 21:07:09.938: INFO: e2e test version: v1.17.3
Mar 21 21:07:09.939: INFO: kube-apiserver version: v1.17.2
Mar 21 21:07:09.939: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 21:07:09.946: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:07:09.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
Mar 21 21:07:10.024: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-569.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-569.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 21 21:07:16.111: INFO: DNS probes using dns-569/dns-test-695b6512-e7d5-4f46-990e-5222b61b5221 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:07:16.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-569" for this suite.
• [SLOW TEST:6.248 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":1,"skipped":20,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:07:16.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 21 21:07:16.258: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b669f8d4-0d1b-4abc-945c-d14908ba534a" in namespace "projected-6410" to be "success or failure"
Mar 21 21:07:16.554: INFO: Pod "downwardapi-volume-b669f8d4-0d1b-4abc-945c-d14908ba534a": Phase="Pending", Reason="", readiness=false. Elapsed: 295.904241ms
Mar 21 21:07:18.613: INFO: Pod "downwardapi-volume-b669f8d4-0d1b-4abc-945c-d14908ba534a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.355119295s
Mar 21 21:07:20.618: INFO: Pod "downwardapi-volume-b669f8d4-0d1b-4abc-945c-d14908ba534a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.359392797s
STEP: Saw pod success
Mar 21 21:07:20.618: INFO: Pod "downwardapi-volume-b669f8d4-0d1b-4abc-945c-d14908ba534a" satisfied condition "success or failure"
Mar 21 21:07:20.620: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b669f8d4-0d1b-4abc-945c-d14908ba534a container client-container:
STEP: delete the pod
Mar 21 21:07:20.692: INFO: Waiting for pod downwardapi-volume-b669f8d4-0d1b-4abc-945c-d14908ba534a to disappear
Mar 21 21:07:20.705: INFO: Pod downwardapi-volume-b669f8d4-0d1b-4abc-945c-d14908ba534a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:07:20.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6410" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":59,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
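The framework-generated pod boils down to a projected downward API volume that writes the container's memory request into a file, which the container then prints. A minimal hand-rolled sketch (names and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
        cpu: "250m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
kubectl logs downward-demo   # 33554432, i.e. 32Mi in bytes (default divisor is 1)

The "success or failure" condition in the log is just this pod exiting 0 after printing the expected value.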
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:07:20.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar 21 21:07:20.754: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 21 21:07:20.776: INFO: Waiting for terminating namespaces to be deleted...
Mar 21 21:07:20.779: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Mar 21 21:07:20.784: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Mar 21 21:07:20.784: INFO: Container kindnet-cni ready: true, restart count 0
Mar 21 21:07:20.784: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Mar 21 21:07:20.784: INFO: Container kube-proxy ready: true, restart count 0
Mar 21 21:07:20.784: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Mar 21 21:07:20.800: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Mar 21 21:07:20.800: INFO: Container kindnet-cni ready: true, restart count 0
Mar 21 21:07:20.800: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Mar 21 21:07:20.800: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-11a67415-5367-459c-94ca-4f47d543eb43 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-11a67415-5367-459c-94ca-4f47d543eb43 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-11a67415-5367-459c-94ca-4f47d543eb43
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:07:36.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2560" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:16.254 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":3,"skipped":114,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
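The predicate under test keys host-port conflicts on the full (hostIP, hostPort, protocol) triple rather than on hostPort alone, which is why all three pods fit on one node. A sketch of pod1 from the log (image, container name and containerPort are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: port-holder
    image: registry.k8s.io/pause:3.9
    ports:
    - containerPort: 54321
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
EOF
# pod2: same spec but hostIP: 127.0.0.2 -- schedules onto the same node.
# pod3: hostIP: 127.0.0.2 and protocol: UDP -- also schedules; only an
# identical (hostIP, hostPort, protocol) triple would conflict.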
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:07:36.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar 21 21:07:37.015: INFO: Waiting up to 5m0s for pod "downward-api-e83e58c9-efa3-458b-8619-0ebd9385a27c" in namespace "downward-api-9685" to be "success or failure"
Mar 21 21:07:37.019: INFO: Pod "downward-api-e83e58c9-efa3-458b-8619-0ebd9385a27c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.412885ms
Mar 21 21:07:39.022: INFO: Pod "downward-api-e83e58c9-efa3-458b-8619-0ebd9385a27c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006928837s
Mar 21 21:07:41.027: INFO: Pod "downward-api-e83e58c9-efa3-458b-8619-0ebd9385a27c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011525663s
STEP: Saw pod success
Mar 21 21:07:41.027: INFO: Pod "downward-api-e83e58c9-efa3-458b-8619-0ebd9385a27c" satisfied condition "success or failure"
Mar 21 21:07:41.030: INFO: Trying to get logs from node jerma-worker pod downward-api-e83e58c9-efa3-458b-8619-0ebd9385a27c container dapi-container:
STEP: delete the pod
Mar 21 21:07:41.049: INFO: Waiting for pod downward-api-e83e58c9-efa3-458b-8619-0ebd9385a27c to disappear
Mar 21 21:07:41.059: INFO: Pod downward-api-e83e58c9-efa3-458b-8619-0ebd9385a27c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:07:41.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9685" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":157,"failed":0}
------------------------------
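"Default limits from node allocatable" means that when a container declares no limits, env vars built from resourceFieldRef limits.cpu / limits.memory report the node's allocatable values instead. A minimal sketch (names and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
kubectl logs dapi-demo   # with no limits set, both values equal node allocatable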
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:07:41.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 21 21:07:41.691: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 21 21:07:43.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720421661, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720421661, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720421661, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720421661, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 21 21:07:46.754: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:07:58.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2918" for this suite.
STEP: Destroying namespace "webhook-2918-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.985 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":5,"skipped":157,"failed":0}
SSSSSSSS
------------------------------
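Each "Registering slow webhook" STEP installs a configuration shaped like the sketch below, varying timeoutSeconds and failurePolicy. The service name follows the log; the path, rule set and omitted caBundle are illustrative assumptions about the test's webhook server:

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-demo
webhooks:
- name: slow.webhook.example.com
  timeoutSeconds: 1        # shorter than the webhook's 5s sleep
  failurePolicy: Fail      # Ignore would admit the request instead
  clientConfig:            # caBundle omitted in this sketch
    service:
      namespace: webhook-2918
      name: e2e-test-webhook
      path: /always-allow-delay-5s
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF

With timeoutSeconds: 1 and failurePolicy: Fail the matching request errors out; with Ignore it goes through, and leaving timeoutSeconds unset defaults to 10s in v1, as the STEPs above exercise.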
[sig-api-machinery] Watchers
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:07:59.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:08:04.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8887" for this suite.
• [SLOW TEST:5.706 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":6,"skipped":165,"failed":0}
SSSSSSSSSS
------------------------------
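The guarantee being checked is that watches started from the same resourceVersion replay events in one canonical order. A manual sketch using two raw watch streams against the API (namespace and object names are illustrative):

kubectl proxy --port=8001 &
kubectl create configmap watch-demo --from-literal=k=1
RV=$(kubectl get configmap watch-demo -o jsonpath='{.metadata.resourceVersion}')
# Run this same curl in two terminals; both streams deliver the ADDED/MODIFIED
# events in exactly the same order, which is what the test asserts at scale.
curl -s "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}"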
PTR)" && test -n "$$check" && echo OK > /results/10.104.139.161_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5018.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5018.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5018.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5018.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5018.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5018.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5018.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5018.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5018.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5018.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5018.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 161.139.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.139.161_udp@PTR;check="$$(dig +tcp +noall +answer +search 161.139.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.139.161_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 21:08:09.005: INFO: Unable to read wheezy_udp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:09.008: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:09.010: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:09.013: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:09.034: INFO: Unable to read jessie_udp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:09.037: INFO: Unable to read jessie_tcp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:09.039: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:09.042: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:09.060: INFO: Lookups using dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3 failed for: [wheezy_udp@dns-test-service.dns-5018.svc.cluster.local wheezy_tcp@dns-test-service.dns-5018.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local jessie_udp@dns-test-service.dns-5018.svc.cluster.local jessie_tcp@dns-test-service.dns-5018.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local] Mar 21 21:08:14.065: INFO: Unable to read wheezy_udp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:14.069: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods 
dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:14.072: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:14.076: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:14.119: INFO: Unable to read jessie_udp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:14.122: INFO: Unable to read jessie_tcp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:14.125: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:14.128: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:14.146: INFO: Lookups using dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3 failed for: [wheezy_udp@dns-test-service.dns-5018.svc.cluster.local wheezy_tcp@dns-test-service.dns-5018.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local jessie_udp@dns-test-service.dns-5018.svc.cluster.local jessie_tcp@dns-test-service.dns-5018.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local] Mar 21 21:08:19.064: INFO: Unable to read wheezy_udp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:19.068: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:19.071: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:19.075: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:19.099: INFO: Unable to read jessie_udp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the 
server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:19.102: INFO: Unable to read jessie_tcp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:19.105: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:19.108: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:19.156: INFO: Lookups using dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3 failed for: [wheezy_udp@dns-test-service.dns-5018.svc.cluster.local wheezy_tcp@dns-test-service.dns-5018.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local jessie_udp@dns-test-service.dns-5018.svc.cluster.local jessie_tcp@dns-test-service.dns-5018.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local] Mar 21 21:08:24.064: INFO: Unable to read wheezy_udp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:24.068: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:24.072: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:24.075: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:24.099: INFO: Unable to read jessie_udp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:24.103: INFO: Unable to read jessie_tcp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:24.106: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:24.109: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod 
dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:24.129: INFO: Lookups using dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3 failed for: [wheezy_udp@dns-test-service.dns-5018.svc.cluster.local wheezy_tcp@dns-test-service.dns-5018.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local jessie_udp@dns-test-service.dns-5018.svc.cluster.local jessie_tcp@dns-test-service.dns-5018.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local] Mar 21 21:08:29.069: INFO: Unable to read wheezy_udp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:29.072: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:29.076: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:29.078: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:29.100: INFO: Unable to read jessie_udp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:29.102: INFO: Unable to read jessie_tcp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:29.105: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:29.107: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:29.150: INFO: Lookups using dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3 failed for: [wheezy_udp@dns-test-service.dns-5018.svc.cluster.local wheezy_tcp@dns-test-service.dns-5018.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local jessie_udp@dns-test-service.dns-5018.svc.cluster.local jessie_tcp@dns-test-service.dns-5018.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local] Mar 21 
21:08:34.065: INFO: Unable to read wheezy_udp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:34.069: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:34.073: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:34.076: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:34.099: INFO: Unable to read jessie_udp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:34.103: INFO: Unable to read jessie_tcp@dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:34.106: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:34.110: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local from pod dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3: the server could not find the requested resource (get pods dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3) Mar 21 21:08:34.129: INFO: Lookups using dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3 failed for: [wheezy_udp@dns-test-service.dns-5018.svc.cluster.local wheezy_tcp@dns-test-service.dns-5018.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local jessie_udp@dns-test-service.dns-5018.svc.cluster.local jessie_tcp@dns-test-service.dns-5018.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5018.svc.cluster.local] Mar 21 21:08:39.117: INFO: DNS probes using dns-5018/dns-test-88a91c13-ec6a-476d-961e-1a471196ddd3 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:08:39.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5018" for this suite. 
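The repeated "Unable to read ..." lines above are the normal polling phase: the prober rechecks every few seconds until the probe pod has written a result file for every name, and here all lookups succeed at 21:08:39. The same records can be queried by hand from any pod whose image ships dig; the sketch below uses names from the log, and the dnsutils pod, image path and tag are assumptions:

kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never --command -- sleep 3600
# A, SRV and PTR records the probe validates:
kubectl exec dnsutils -- dig +short dns-test-service.dns-5018.svc.cluster.local A
kubectl exec dnsutils -- dig +short _http._tcp.dns-test-service.dns-5018.svc.cluster.local SRV
kubectl exec dnsutils -- dig +short -x 10.104.139.161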
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:08:40.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 21 21:08:44.861: INFO: Successfully updated pod "pod-update-activedeadlineseconds-708f803d-054d-44c2-9ba9-abb18fd09550"
Mar 21 21:08:44.861: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-708f803d-054d-44c2-9ba9-abb18fd09550" in namespace "pods-3551" to be "terminated due to deadline exceeded"
Mar 21 21:08:44.864: INFO: Pod "pod-update-activedeadlineseconds-708f803d-054d-44c2-9ba9-abb18fd09550": Phase="Running", Reason="", readiness=true. Elapsed: 2.824551ms
Mar 21 21:08:46.868: INFO: Pod "pod-update-activedeadlineseconds-708f803d-054d-44c2-9ba9-abb18fd09550": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.006958862s
Mar 21 21:08:46.868: INFO: Pod "pod-update-activedeadlineseconds-708f803d-054d-44c2-9ba9-abb18fd09550" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:08:46.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3551" for this suite.
• [SLOW TEST:6.870 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":203,"failed":0}
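The update in question sets spec.activeDeadlineSeconds on a running pod, after which the kubelet fails the pod, which is exactly the Phase="Failed", Reason="DeadlineExceeded" transition in the log. A minimal sketch (names are illustrative):

kubectl run deadline-demo --image=busybox:1.28 --restart=Never -- sleep 600
kubectl wait --for=condition=Ready pod/deadline-demo
kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
# after ~5s:
kubectl get pod deadline-demo -o jsonpath='{.status.reason}'   # DeadlineExceeded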
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:08:46.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-1315
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1315 to expose endpoints map[]
Mar 21 21:08:46.997: INFO: Get endpoints failed (11.132433ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 21 21:08:48.001: INFO: successfully validated that service endpoint-test2 in namespace services-1315 exposes endpoints map[] (1.014744923s elapsed)
STEP: Creating pod pod1 in namespace services-1315
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1315 to expose endpoints map[pod1:[80]]
Mar 21 21:08:51.043: INFO: successfully validated that service endpoint-test2 in namespace services-1315 exposes endpoints map[pod1:[80]] (3.034100794s elapsed)
STEP: Creating pod pod2 in namespace services-1315
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1315 to expose endpoints map[pod1:[80] pod2:[80]]
Mar 21 21:08:54.170: INFO: successfully validated that service endpoint-test2 in namespace services-1315 exposes endpoints map[pod1:[80] pod2:[80]] (3.124479629s elapsed)
STEP: Deleting pod pod1 in namespace services-1315
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1315 to expose endpoints map[pod2:[80]]
Mar 21 21:08:55.221: INFO: successfully validated that service endpoint-test2 in namespace services-1315 exposes endpoints map[pod2:[80]] (1.045950821s elapsed)
STEP: Deleting pod pod2 in namespace services-1315
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1315 to expose endpoints map[]
Mar 21 21:08:56.360: INFO: successfully validated that service endpoint-test2 in namespace services-1315 exposes endpoints map[] (1.134342127s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:08:56.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1315" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:9.548 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":9,"skipped":203,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
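The bookkeeping being validated is that an Endpoints object tracks exactly the ready pods matching the service's selector, filling as pods appear and draining as they are deleted. A hand-rolled sketch (names are illustrative; kubectl create service uses app=<name> as the selector):

kubectl create service clusterip endpoint-demo --tcp=80:80
kubectl run pod1 --image=nginx --labels=app=endpoint-demo
# Watch pod1's IP appear under ADDRESSES, then delete pod1 and watch it drain:
kubectl get endpoints endpoint-demo -w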
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:08:56.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:08:56.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5863" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":230,"failed":0}
SSSSSSSSS
------------------------------
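A minimal reproduction of that scenario, as a sketch with illustrative names: a pod whose command always exits non-zero still deletes cleanly.

kubectl run always-fails --image=busybox:1.28 --restart=Never -- /bin/false
kubectl delete pod always-fails   # deletion succeeds regardless of the crash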
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:08:56.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:09:01.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3357" for this suite.
• [SLOW TEST:5.293 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":11,"skipped":239,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
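Adoption means the new controller takes ownership of a pre-existing pod whose labels match its selector instead of creating a fresh replica; afterwards the orphan carries an ownerReference to the RC. A sketch (names and image are illustrative):

kubectl run pod-adoption --image=nginx --labels=name=pod-adoption
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'  # ReplicationController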
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:09:06.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:09:24.196: INFO: Container started at 2020-03-21 21:09:08 +0000 UTC, pod became ready at 2020-03-21 21:09:23 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:09:24.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2428" for this suite. • [SLOW TEST:18.116 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":299,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:09:24.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 21 21:09:24.261: INFO: Waiting up to 5m0s for pod "pod-da0db3cc-e3fa-4d56-8eb6-3a1e59a42f47" in namespace "emptydir-8077" to be "success or failure" Mar 21 21:09:24.265: INFO: Pod "pod-da0db3cc-e3fa-4d56-8eb6-3a1e59a42f47": Phase="Pending", Reason="", readiness=false. Elapsed: 3.949888ms Mar 21 21:09:26.270: INFO: Pod "pod-da0db3cc-e3fa-4d56-8eb6-3a1e59a42f47": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008144194s Mar 21 21:09:28.274: INFO: Pod "pod-da0db3cc-e3fa-4d56-8eb6-3a1e59a42f47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012279607s STEP: Saw pod success Mar 21 21:09:28.274: INFO: Pod "pod-da0db3cc-e3fa-4d56-8eb6-3a1e59a42f47" satisfied condition "success or failure" Mar 21 21:09:28.277: INFO: Trying to get logs from node jerma-worker2 pod pod-da0db3cc-e3fa-4d56-8eb6-3a1e59a42f47 container test-container: STEP: delete the pod Mar 21 21:09:28.297: INFO: Waiting for pod pod-da0db3cc-e3fa-4d56-8eb6-3a1e59a42f47 to disappear Mar 21 21:09:28.301: INFO: Pod pod-da0db3cc-e3fa-4d56-8eb6-3a1e59a42f47 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:09:28.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8077" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":304,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:09:28.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-a27403d2-b503-4303-8e2e-58c1785e7d2d in namespace container-probe-2902 Mar 21 21:09:32.389: INFO: Started pod busybox-a27403d2-b503-4303-8e2e-58c1785e7d2d in namespace container-probe-2902 STEP: checking the pod's current state and verifying that restartCount is present Mar 21 21:09:32.392: INFO: Initial restart count of pod busybox-a27403d2-b503-4303-8e2e-58c1785e7d2d is 0 Mar 21 21:10:24.553: INFO: Restart count of pod container-probe-2902/busybox-a27403d2-b503-4303-8e2e-58c1785e7d2d is now 1 (52.160967605s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:10:24.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2902" for this suite. 
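The liveness-probe test that just finished is the standard exec-probe pattern: the container creates /tmp/health, removes it after a while, and the kubelet restarts the container once "cat /tmp/health" starts failing, which is the restartCount 0 -> 1 transition logged above. A minimal sketch under those assumptions (image and timings illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-example
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds only while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5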
• [SLOW TEST:56.281 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":320,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:10:24.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 21 21:10:24.652: INFO: >>> kubeConfig: /root/.kube/config Mar 21 21:10:27.592: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:10:37.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2190" for this suite. 
• [SLOW TEST:12.488 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":16,"skipped":391,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:10:37.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 21 21:10:37.151: INFO: Waiting up to 5m0s for pod "pod-dd8ddfd0-a5a1-4d60-8f59-4b8c517cfec0" in namespace "emptydir-9837" to be "success or failure" Mar 21 21:10:37.155: INFO: Pod "pod-dd8ddfd0-a5a1-4d60-8f59-4b8c517cfec0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.459495ms Mar 21 21:10:39.174: INFO: Pod "pod-dd8ddfd0-a5a1-4d60-8f59-4b8c517cfec0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023500879s Mar 21 21:10:41.179: INFO: Pod "pod-dd8ddfd0-a5a1-4d60-8f59-4b8c517cfec0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027943593s STEP: Saw pod success Mar 21 21:10:41.179: INFO: Pod "pod-dd8ddfd0-a5a1-4d60-8f59-4b8c517cfec0" satisfied condition "success or failure" Mar 21 21:10:41.182: INFO: Trying to get logs from node jerma-worker pod pod-dd8ddfd0-a5a1-4d60-8f59-4b8c517cfec0 container test-container: STEP: delete the pod Mar 21 21:10:41.264: INFO: Waiting for pod pod-dd8ddfd0-a5a1-4d60-8f59-4b8c517cfec0 to disappear Mar 21 21:10:41.276: INFO: Pod pod-dd8ddfd0-a5a1-4d60-8f59-4b8c517cfec0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:10:41.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9837" for this suite. 
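Both emptyDir cases in this run boil down to mounting an emptyDir volume and having the container report the mount's mode, which the framework reads back from the logs. A minimal sketch (image and paths illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c 'mode=%a' /test-volume"]   # prints the mount's permission bits
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}    # default medium = node-local disk; 'medium: Memory' would use tmpfs

On the default medium the suite expects mode 0777 on the mount point (per the test names above), which is what the passing checks confirm.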
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":395,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:10:41.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 21 21:10:46.408: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:10:47.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9136" for this suite. • [SLOW TEST:6.142 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":18,"skipped":445,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:10:47.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:10:51.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9000" for this 
suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":454,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:10:51.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:10:51.609: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1665 I0321 21:10:51.638723 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1665, replica count: 1 I0321 21:10:52.689310 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 21:10:53.689528 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 21:10:54.689801 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 21 21:10:54.819: INFO: Created: latency-svc-cb6vm Mar 21 21:10:54.844: INFO: Got endpoints: latency-svc-cb6vm [53.997287ms] Mar 21 21:10:54.875: INFO: Created: latency-svc-7mt8d Mar 21 21:10:54.887: INFO: Got endpoints: latency-svc-7mt8d [43.699432ms] Mar 21 21:10:54.927: INFO: Created: latency-svc-kh7qx Mar 21 21:10:54.941: INFO: Got endpoints: latency-svc-kh7qx [96.825472ms] Mar 21 21:10:54.993: INFO: Created: latency-svc-kl2mj Mar 21 21:10:55.011: INFO: Got endpoints: latency-svc-kl2mj [166.922654ms] Mar 21 21:10:55.059: INFO: Created: latency-svc-26825 Mar 21 21:10:55.068: INFO: Got endpoints: latency-svc-26825 [224.24874ms] Mar 21 21:10:55.091: INFO: Created: latency-svc-ptj7c Mar 21 21:10:55.104: INFO: Got endpoints: latency-svc-ptj7c [260.208597ms] Mar 21 21:10:55.126: INFO: Created: latency-svc-rzl2l Mar 21 21:10:55.140: INFO: Got endpoints: latency-svc-rzl2l [296.30153ms] Mar 21 21:10:55.195: INFO: Created: latency-svc-rv454 Mar 21 21:10:55.199: INFO: Got endpoints: latency-svc-rv454 [355.091513ms] Mar 21 21:10:55.228: INFO: Created: latency-svc-t992s Mar 21 21:10:55.244: INFO: Got endpoints: latency-svc-t992s [400.573322ms] Mar 21 21:10:55.263: INFO: Created: latency-svc-7xqz9 Mar 21 21:10:55.273: INFO: Got endpoints: latency-svc-7xqz9 [429.587319ms] Mar 21 21:10:55.335: INFO: Created: latency-svc-g4q8h Mar 21 21:10:55.361: INFO: Created: latency-svc-sxq4r Mar 21 21:10:55.361: INFO: Got endpoints: latency-svc-g4q8h [517.526486ms] Mar 21 21:10:55.391: INFO: Got endpoints: latency-svc-sxq4r [547.089522ms] Mar 21 21:10:55.431: INFO: Created: latency-svc-qlglk Mar 21 21:10:55.484: INFO: Got endpoints: latency-svc-qlglk [640.518173ms] Mar 21 21:10:55.504: INFO: Created: latency-svc-kptjx Mar 21 21:10:55.514: INFO: Got endpoints: 
latency-svc-kptjx [670.19254ms] Mar 21 21:10:55.541: INFO: Created: latency-svc-s2mw8 Mar 21 21:10:55.563: INFO: Got endpoints: latency-svc-s2mw8 [718.868989ms] Mar 21 21:10:55.576: INFO: Created: latency-svc-knlvj Mar 21 21:10:55.647: INFO: Got endpoints: latency-svc-knlvj [803.243296ms] Mar 21 21:10:55.649: INFO: Created: latency-svc-57kcd Mar 21 21:10:55.689: INFO: Got endpoints: latency-svc-57kcd [801.62011ms] Mar 21 21:10:55.725: INFO: Created: latency-svc-xd94l Mar 21 21:10:55.796: INFO: Got endpoints: latency-svc-xd94l [855.73456ms] Mar 21 21:10:55.811: INFO: Created: latency-svc-9f7f8 Mar 21 21:10:55.827: INFO: Got endpoints: latency-svc-9f7f8 [816.337108ms] Mar 21 21:10:55.871: INFO: Created: latency-svc-m6cr6 Mar 21 21:10:55.887: INFO: Got endpoints: latency-svc-m6cr6 [819.067891ms] Mar 21 21:10:55.934: INFO: Created: latency-svc-zd457 Mar 21 21:10:55.938: INFO: Got endpoints: latency-svc-zd457 [834.153582ms] Mar 21 21:10:55.958: INFO: Created: latency-svc-mvnzq Mar 21 21:10:55.990: INFO: Got endpoints: latency-svc-mvnzq [849.983396ms] Mar 21 21:10:56.027: INFO: Created: latency-svc-hwnp9 Mar 21 21:10:56.078: INFO: Got endpoints: latency-svc-hwnp9 [879.015042ms] Mar 21 21:10:56.097: INFO: Created: latency-svc-6hsbj Mar 21 21:10:56.122: INFO: Got endpoints: latency-svc-6hsbj [877.857603ms] Mar 21 21:10:56.299: INFO: Created: latency-svc-md9lz Mar 21 21:10:56.308: INFO: Got endpoints: latency-svc-md9lz [1.034931384s] Mar 21 21:10:56.522: INFO: Created: latency-svc-m6wkz Mar 21 21:10:56.572: INFO: Got endpoints: latency-svc-m6wkz [1.210145744s] Mar 21 21:10:56.572: INFO: Created: latency-svc-7hthz Mar 21 21:10:56.594: INFO: Got endpoints: latency-svc-7hthz [1.203151634s] Mar 21 21:10:56.646: INFO: Created: latency-svc-vvmh7 Mar 21 21:10:56.650: INFO: Got endpoints: latency-svc-vvmh7 [1.165648424s] Mar 21 21:10:56.674: INFO: Created: latency-svc-rjm8p Mar 21 21:10:56.710: INFO: Got endpoints: latency-svc-rjm8p [1.195955255s] Mar 21 21:10:56.814: INFO: Created: latency-svc-r8dq7 Mar 21 21:10:56.819: INFO: Got endpoints: latency-svc-r8dq7 [1.255958032s] Mar 21 21:10:56.842: INFO: Created: latency-svc-zltvv Mar 21 21:10:56.859: INFO: Got endpoints: latency-svc-zltvv [1.211959722s] Mar 21 21:10:56.882: INFO: Created: latency-svc-b2lw4 Mar 21 21:10:56.889: INFO: Got endpoints: latency-svc-b2lw4 [1.199958505s] Mar 21 21:10:56.908: INFO: Created: latency-svc-6zk7t Mar 21 21:10:56.975: INFO: Got endpoints: latency-svc-6zk7t [1.179155941s] Mar 21 21:10:56.978: INFO: Created: latency-svc-h6wrf Mar 21 21:10:56.986: INFO: Got endpoints: latency-svc-h6wrf [1.158485308s] Mar 21 21:10:57.009: INFO: Created: latency-svc-wrvwz Mar 21 21:10:57.022: INFO: Got endpoints: latency-svc-wrvwz [1.135020669s] Mar 21 21:10:57.046: INFO: Created: latency-svc-gttj7 Mar 21 21:10:57.064: INFO: Got endpoints: latency-svc-gttj7 [1.125986193s] Mar 21 21:10:57.128: INFO: Created: latency-svc-qm9zx Mar 21 21:10:57.136: INFO: Got endpoints: latency-svc-qm9zx [1.145880723s] Mar 21 21:10:57.158: INFO: Created: latency-svc-tqwwq Mar 21 21:10:57.173: INFO: Got endpoints: latency-svc-tqwwq [1.095098315s] Mar 21 21:10:57.201: INFO: Created: latency-svc-67xff Mar 21 21:10:57.257: INFO: Got endpoints: latency-svc-67xff [1.134646354s] Mar 21 21:10:57.285: INFO: Created: latency-svc-frsz7 Mar 21 21:10:57.299: INFO: Got endpoints: latency-svc-frsz7 [990.831952ms] Mar 21 21:10:57.331: INFO: Created: latency-svc-v4trx Mar 21 21:10:57.343: INFO: Got endpoints: latency-svc-v4trx [771.572941ms] Mar 21 21:10:57.401: INFO: Created: 
latency-svc-np8jx Mar 21 21:10:57.404: INFO: Got endpoints: latency-svc-np8jx [809.767494ms] Mar 21 21:10:57.430: INFO: Created: latency-svc-dv8ff Mar 21 21:10:57.444: INFO: Got endpoints: latency-svc-dv8ff [793.441377ms] Mar 21 21:10:57.472: INFO: Created: latency-svc-nvhf7 Mar 21 21:10:57.496: INFO: Got endpoints: latency-svc-nvhf7 [786.176788ms] Mar 21 21:10:57.568: INFO: Created: latency-svc-bhsxm Mar 21 21:10:57.576: INFO: Got endpoints: latency-svc-bhsxm [757.278099ms] Mar 21 21:10:57.603: INFO: Created: latency-svc-smw7j Mar 21 21:10:57.619: INFO: Got endpoints: latency-svc-smw7j [759.641235ms] Mar 21 21:10:57.640: INFO: Created: latency-svc-wzflb Mar 21 21:10:57.655: INFO: Got endpoints: latency-svc-wzflb [766.158577ms] Mar 21 21:10:57.712: INFO: Created: latency-svc-l6mxv Mar 21 21:10:57.735: INFO: Created: latency-svc-f7v2r Mar 21 21:10:57.735: INFO: Got endpoints: latency-svc-l6mxv [759.504756ms] Mar 21 21:10:57.748: INFO: Got endpoints: latency-svc-f7v2r [762.361916ms] Mar 21 21:10:57.777: INFO: Created: latency-svc-k54z7 Mar 21 21:10:57.790: INFO: Got endpoints: latency-svc-k54z7 [767.281344ms] Mar 21 21:10:57.856: INFO: Created: latency-svc-4xkhj Mar 21 21:10:57.859: INFO: Got endpoints: latency-svc-4xkhj [794.736458ms] Mar 21 21:10:57.923: INFO: Created: latency-svc-tjpcb Mar 21 21:10:57.940: INFO: Got endpoints: latency-svc-tjpcb [803.937064ms] Mar 21 21:10:58.019: INFO: Created: latency-svc-9vmtx Mar 21 21:10:58.030: INFO: Got endpoints: latency-svc-9vmtx [857.221142ms] Mar 21 21:10:58.076: INFO: Created: latency-svc-pkkcj Mar 21 21:10:58.097: INFO: Got endpoints: latency-svc-pkkcj [839.697464ms] Mar 21 21:10:58.162: INFO: Created: latency-svc-9r9f4 Mar 21 21:10:58.165: INFO: Got endpoints: latency-svc-9r9f4 [866.091386ms] Mar 21 21:10:58.208: INFO: Created: latency-svc-2lrmc Mar 21 21:10:58.223: INFO: Got endpoints: latency-svc-2lrmc [879.640173ms] Mar 21 21:10:58.250: INFO: Created: latency-svc-jqk8n Mar 21 21:10:58.260: INFO: Got endpoints: latency-svc-jqk8n [855.405025ms] Mar 21 21:10:58.325: INFO: Created: latency-svc-f5g56 Mar 21 21:10:58.342: INFO: Got endpoints: latency-svc-f5g56 [898.192644ms] Mar 21 21:10:58.366: INFO: Created: latency-svc-mr9nd Mar 21 21:10:58.379: INFO: Got endpoints: latency-svc-mr9nd [882.91176ms] Mar 21 21:10:58.402: INFO: Created: latency-svc-vxlc9 Mar 21 21:10:58.416: INFO: Got endpoints: latency-svc-vxlc9 [839.27094ms] Mar 21 21:10:58.474: INFO: Created: latency-svc-dhb58 Mar 21 21:10:58.497: INFO: Got endpoints: latency-svc-dhb58 [877.74208ms] Mar 21 21:10:58.522: INFO: Created: latency-svc-gnq2f Mar 21 21:10:58.536: INFO: Got endpoints: latency-svc-gnq2f [880.729428ms] Mar 21 21:10:58.564: INFO: Created: latency-svc-bd5wp Mar 21 21:10:58.598: INFO: Got endpoints: latency-svc-bd5wp [863.100843ms] Mar 21 21:10:58.624: INFO: Created: latency-svc-5hw79 Mar 21 21:10:58.639: INFO: Got endpoints: latency-svc-5hw79 [890.839626ms] Mar 21 21:10:58.779: INFO: Created: latency-svc-q87pl Mar 21 21:10:58.784: INFO: Got endpoints: latency-svc-q87pl [994.340249ms] Mar 21 21:10:58.810: INFO: Created: latency-svc-n9mdr Mar 21 21:10:58.826: INFO: Got endpoints: latency-svc-n9mdr [966.591487ms] Mar 21 21:10:58.852: INFO: Created: latency-svc-mcb5v Mar 21 21:10:58.867: INFO: Got endpoints: latency-svc-mcb5v [926.984324ms] Mar 21 21:10:58.928: INFO: Created: latency-svc-fv5lb Mar 21 21:10:58.946: INFO: Got endpoints: latency-svc-fv5lb [916.034347ms] Mar 21 21:10:58.984: INFO: Created: latency-svc-lld9r Mar 21 21:10:59.000: INFO: Got endpoints: 
latency-svc-lld9r [902.791708ms] Mar 21 21:10:59.026: INFO: Created: latency-svc-jlzq4 Mar 21 21:10:59.084: INFO: Got endpoints: latency-svc-jlzq4 [918.482431ms] Mar 21 21:10:59.115: INFO: Created: latency-svc-v2rm7 Mar 21 21:10:59.126: INFO: Got endpoints: latency-svc-v2rm7 [903.12821ms] Mar 21 21:10:59.150: INFO: Created: latency-svc-4jnx9 Mar 21 21:10:59.197: INFO: Got endpoints: latency-svc-4jnx9 [937.497135ms] Mar 21 21:10:59.225: INFO: Created: latency-svc-zvwhq Mar 21 21:10:59.240: INFO: Got endpoints: latency-svc-zvwhq [898.353059ms] Mar 21 21:10:59.266: INFO: Created: latency-svc-6t9tw Mar 21 21:10:59.283: INFO: Got endpoints: latency-svc-6t9tw [903.419078ms] Mar 21 21:10:59.336: INFO: Created: latency-svc-hxtpv Mar 21 21:10:59.339: INFO: Got endpoints: latency-svc-hxtpv [923.009555ms] Mar 21 21:10:59.384: INFO: Created: latency-svc-wqnq8 Mar 21 21:10:59.397: INFO: Got endpoints: latency-svc-wqnq8 [900.543178ms] Mar 21 21:10:59.416: INFO: Created: latency-svc-nb6bk Mar 21 21:10:59.472: INFO: Got endpoints: latency-svc-nb6bk [936.076462ms] Mar 21 21:10:59.486: INFO: Created: latency-svc-kx9f5 Mar 21 21:10:59.499: INFO: Got endpoints: latency-svc-kx9f5 [901.035026ms] Mar 21 21:10:59.523: INFO: Created: latency-svc-v5w7z Mar 21 21:10:59.536: INFO: Got endpoints: latency-svc-v5w7z [896.692399ms] Mar 21 21:10:59.560: INFO: Created: latency-svc-kffwt Mar 21 21:10:59.616: INFO: Got endpoints: latency-svc-kffwt [831.71952ms] Mar 21 21:10:59.667: INFO: Created: latency-svc-57zk6 Mar 21 21:10:59.680: INFO: Got endpoints: latency-svc-57zk6 [853.940859ms] Mar 21 21:10:59.754: INFO: Created: latency-svc-khbqz Mar 21 21:10:59.758: INFO: Got endpoints: latency-svc-khbqz [890.44433ms] Mar 21 21:10:59.794: INFO: Created: latency-svc-qr7hf Mar 21 21:10:59.806: INFO: Got endpoints: latency-svc-qr7hf [859.956234ms] Mar 21 21:10:59.824: INFO: Created: latency-svc-pwbjn Mar 21 21:10:59.837: INFO: Got endpoints: latency-svc-pwbjn [837.323822ms] Mar 21 21:10:59.892: INFO: Created: latency-svc-m2vs5 Mar 21 21:10:59.896: INFO: Got endpoints: latency-svc-m2vs5 [811.768305ms] Mar 21 21:10:59.948: INFO: Created: latency-svc-4w9qh Mar 21 21:10:59.963: INFO: Got endpoints: latency-svc-4w9qh [836.61799ms] Mar 21 21:10:59.984: INFO: Created: latency-svc-tjbx2 Mar 21 21:11:00.042: INFO: Got endpoints: latency-svc-tjbx2 [844.352263ms] Mar 21 21:11:00.076: INFO: Created: latency-svc-9jnnt Mar 21 21:11:00.089: INFO: Got endpoints: latency-svc-9jnnt [848.844533ms] Mar 21 21:11:00.122: INFO: Created: latency-svc-h46fx Mar 21 21:11:00.179: INFO: Got endpoints: latency-svc-h46fx [896.037187ms] Mar 21 21:11:00.200: INFO: Created: latency-svc-b5hgr Mar 21 21:11:00.216: INFO: Got endpoints: latency-svc-b5hgr [877.072368ms] Mar 21 21:11:00.238: INFO: Created: latency-svc-bmkkj Mar 21 21:11:00.256: INFO: Got endpoints: latency-svc-bmkkj [859.132219ms] Mar 21 21:11:00.329: INFO: Created: latency-svc-p6zbl Mar 21 21:11:00.333: INFO: Got endpoints: latency-svc-p6zbl [860.32445ms] Mar 21 21:11:00.380: INFO: Created: latency-svc-98hmw Mar 21 21:11:00.397: INFO: Got endpoints: latency-svc-98hmw [897.809254ms] Mar 21 21:11:00.467: INFO: Created: latency-svc-9twkm Mar 21 21:11:00.469: INFO: Got endpoints: latency-svc-9twkm [933.652198ms] Mar 21 21:11:00.496: INFO: Created: latency-svc-dttg7 Mar 21 21:11:00.505: INFO: Got endpoints: latency-svc-dttg7 [889.518214ms] Mar 21 21:11:00.526: INFO: Created: latency-svc-fslxt Mar 21 21:11:00.665: INFO: Got endpoints: latency-svc-fslxt [985.499414ms] Mar 21 21:11:00.674: INFO: Created: 
latency-svc-hsxc6 Mar 21 21:11:00.764: INFO: Got endpoints: latency-svc-hsxc6 [1.005746908s] Mar 21 21:11:00.826: INFO: Created: latency-svc-d7kd2 Mar 21 21:11:00.836: INFO: Got endpoints: latency-svc-d7kd2 [1.029235742s] Mar 21 21:11:00.856: INFO: Created: latency-svc-5s6ts Mar 21 21:11:00.866: INFO: Got endpoints: latency-svc-5s6ts [1.028704706s] Mar 21 21:11:00.902: INFO: Created: latency-svc-7x4xx Mar 21 21:11:00.964: INFO: Got endpoints: latency-svc-7x4xx [1.068195653s] Mar 21 21:11:01.011: INFO: Created: latency-svc-gnjhj Mar 21 21:11:01.048: INFO: Got endpoints: latency-svc-gnjhj [1.085075858s] Mar 21 21:11:01.107: INFO: Created: latency-svc-pgn6d Mar 21 21:11:01.111: INFO: Got endpoints: latency-svc-pgn6d [1.069479625s] Mar 21 21:11:01.136: INFO: Created: latency-svc-tvb72 Mar 21 21:11:01.160: INFO: Got endpoints: latency-svc-tvb72 [1.070578166s] Mar 21 21:11:01.184: INFO: Created: latency-svc-rg8b2 Mar 21 21:11:01.197: INFO: Got endpoints: latency-svc-rg8b2 [1.018108453s] Mar 21 21:11:01.253: INFO: Created: latency-svc-n64k6 Mar 21 21:11:01.253: INFO: Got endpoints: latency-svc-n64k6 [1.037174333s] Mar 21 21:11:01.282: INFO: Created: latency-svc-8nmmk Mar 21 21:11:01.294: INFO: Got endpoints: latency-svc-8nmmk [1.037484881s] Mar 21 21:11:01.316: INFO: Created: latency-svc-gxdfr Mar 21 21:11:01.330: INFO: Got endpoints: latency-svc-gxdfr [997.352325ms] Mar 21 21:11:01.383: INFO: Created: latency-svc-bcvvb Mar 21 21:11:01.387: INFO: Got endpoints: latency-svc-bcvvb [989.508568ms] Mar 21 21:11:01.418: INFO: Created: latency-svc-5twfx Mar 21 21:11:01.433: INFO: Got endpoints: latency-svc-5twfx [963.259767ms] Mar 21 21:11:01.449: INFO: Created: latency-svc-cfgxj Mar 21 21:11:01.463: INFO: Got endpoints: latency-svc-cfgxj [957.331961ms] Mar 21 21:11:01.515: INFO: Created: latency-svc-frfzj Mar 21 21:11:01.517: INFO: Got endpoints: latency-svc-frfzj [851.954884ms] Mar 21 21:11:01.588: INFO: Created: latency-svc-jpfk5 Mar 21 21:11:01.607: INFO: Got endpoints: latency-svc-jpfk5 [843.286402ms] Mar 21 21:11:01.666: INFO: Created: latency-svc-kh98l Mar 21 21:11:01.698: INFO: Got endpoints: latency-svc-kh98l [862.32574ms] Mar 21 21:11:01.718: INFO: Created: latency-svc-kgbnj Mar 21 21:11:01.734: INFO: Got endpoints: latency-svc-kgbnj [868.030454ms] Mar 21 21:11:01.784: INFO: Created: latency-svc-xmhfs Mar 21 21:11:01.794: INFO: Got endpoints: latency-svc-xmhfs [830.358893ms] Mar 21 21:11:01.828: INFO: Created: latency-svc-grzfw Mar 21 21:11:01.842: INFO: Got endpoints: latency-svc-grzfw [794.232732ms] Mar 21 21:11:01.863: INFO: Created: latency-svc-4zjhp Mar 21 21:11:01.879: INFO: Got endpoints: latency-svc-4zjhp [767.58033ms] Mar 21 21:11:01.921: INFO: Created: latency-svc-t2t99 Mar 21 21:11:01.925: INFO: Got endpoints: latency-svc-t2t99 [765.551274ms] Mar 21 21:11:01.977: INFO: Created: latency-svc-sdj6b Mar 21 21:11:02.083: INFO: Got endpoints: latency-svc-sdj6b [886.122685ms] Mar 21 21:11:02.096: INFO: Created: latency-svc-2589h Mar 21 21:11:02.123: INFO: Got endpoints: latency-svc-2589h [870.149999ms] Mar 21 21:11:02.152: INFO: Created: latency-svc-26mm4 Mar 21 21:11:02.176: INFO: Got endpoints: latency-svc-26mm4 [882.060041ms] Mar 21 21:11:02.227: INFO: Created: latency-svc-849hs Mar 21 21:11:02.252: INFO: Created: latency-svc-6mxzg Mar 21 21:11:02.252: INFO: Got endpoints: latency-svc-849hs [922.285721ms] Mar 21 21:11:02.283: INFO: Got endpoints: latency-svc-6mxzg [895.846507ms] Mar 21 21:11:02.312: INFO: Created: latency-svc-gh79n Mar 21 21:11:02.347: INFO: Got endpoints: 
latency-svc-gh79n [914.112626ms] Mar 21 21:11:02.362: INFO: Created: latency-svc-5htcj Mar 21 21:11:02.377: INFO: Got endpoints: latency-svc-5htcj [914.262283ms] Mar 21 21:11:02.398: INFO: Created: latency-svc-vkpnv Mar 21 21:11:02.407: INFO: Got endpoints: latency-svc-vkpnv [889.618264ms] Mar 21 21:11:02.426: INFO: Created: latency-svc-mjlwf Mar 21 21:11:02.443: INFO: Got endpoints: latency-svc-mjlwf [836.047373ms] Mar 21 21:11:02.491: INFO: Created: latency-svc-cqjdh Mar 21 21:11:02.494: INFO: Got endpoints: latency-svc-cqjdh [796.228887ms] Mar 21 21:11:02.524: INFO: Created: latency-svc-r5kwh Mar 21 21:11:02.540: INFO: Got endpoints: latency-svc-r5kwh [805.732353ms] Mar 21 21:11:02.560: INFO: Created: latency-svc-qs4wp Mar 21 21:11:02.570: INFO: Got endpoints: latency-svc-qs4wp [775.628364ms] Mar 21 21:11:02.629: INFO: Created: latency-svc-khr7j Mar 21 21:11:02.632: INFO: Got endpoints: latency-svc-khr7j [790.253069ms] Mar 21 21:11:02.666: INFO: Created: latency-svc-25tzw Mar 21 21:11:02.674: INFO: Got endpoints: latency-svc-25tzw [795.311175ms] Mar 21 21:11:02.696: INFO: Created: latency-svc-k8ftf Mar 21 21:11:02.709: INFO: Got endpoints: latency-svc-k8ftf [783.586257ms] Mar 21 21:11:02.726: INFO: Created: latency-svc-54ftk Mar 21 21:11:02.760: INFO: Got endpoints: latency-svc-54ftk [85.901193ms] Mar 21 21:11:02.770: INFO: Created: latency-svc-jkj6v Mar 21 21:11:02.794: INFO: Got endpoints: latency-svc-jkj6v [710.483657ms] Mar 21 21:11:02.830: INFO: Created: latency-svc-tw2gc Mar 21 21:11:02.842: INFO: Got endpoints: latency-svc-tw2gc [718.33001ms] Mar 21 21:11:02.904: INFO: Created: latency-svc-fgcgx Mar 21 21:11:02.907: INFO: Got endpoints: latency-svc-fgcgx [731.37366ms] Mar 21 21:11:02.936: INFO: Created: latency-svc-pltgp Mar 21 21:11:02.950: INFO: Got endpoints: latency-svc-pltgp [697.705132ms] Mar 21 21:11:02.972: INFO: Created: latency-svc-x8nd5 Mar 21 21:11:02.987: INFO: Got endpoints: latency-svc-x8nd5 [703.961015ms] Mar 21 21:11:03.042: INFO: Created: latency-svc-2qw87 Mar 21 21:11:03.052: INFO: Got endpoints: latency-svc-2qw87 [704.962813ms] Mar 21 21:11:03.092: INFO: Created: latency-svc-tvssc Mar 21 21:11:03.107: INFO: Got endpoints: latency-svc-tvssc [729.981133ms] Mar 21 21:11:03.128: INFO: Created: latency-svc-bx5gg Mar 21 21:11:03.167: INFO: Got endpoints: latency-svc-bx5gg [760.194016ms] Mar 21 21:11:03.214: INFO: Created: latency-svc-bn4sz Mar 21 21:11:03.228: INFO: Got endpoints: latency-svc-bn4sz [784.856149ms] Mar 21 21:11:03.251: INFO: Created: latency-svc-xjlnp Mar 21 21:11:03.264: INFO: Got endpoints: latency-svc-xjlnp [769.312717ms] Mar 21 21:11:03.305: INFO: Created: latency-svc-vzb9r Mar 21 21:11:03.309: INFO: Got endpoints: latency-svc-vzb9r [768.748674ms] Mar 21 21:11:03.338: INFO: Created: latency-svc-wfl56 Mar 21 21:11:03.354: INFO: Got endpoints: latency-svc-wfl56 [784.178779ms] Mar 21 21:11:03.380: INFO: Created: latency-svc-lsss8 Mar 21 21:11:03.397: INFO: Got endpoints: latency-svc-lsss8 [764.284687ms] Mar 21 21:11:03.437: INFO: Created: latency-svc-mf8kz Mar 21 21:11:03.439: INFO: Got endpoints: latency-svc-mf8kz [730.170141ms] Mar 21 21:11:03.491: INFO: Created: latency-svc-tfhhx Mar 21 21:11:03.499: INFO: Got endpoints: latency-svc-tfhhx [739.080045ms] Mar 21 21:11:03.518: INFO: Created: latency-svc-f6ddm Mar 21 21:11:03.574: INFO: Got endpoints: latency-svc-f6ddm [780.539648ms] Mar 21 21:11:03.602: INFO: Created: latency-svc-shpdb Mar 21 21:11:03.619: INFO: Got endpoints: latency-svc-shpdb [777.868551ms] Mar 21 21:11:03.640: INFO: Created: 
latency-svc-hnnhw Mar 21 21:11:03.664: INFO: Got endpoints: latency-svc-hnnhw [756.654024ms] Mar 21 21:11:03.730: INFO: Created: latency-svc-qhzkg Mar 21 21:11:03.734: INFO: Got endpoints: latency-svc-qhzkg [783.980987ms] Mar 21 21:11:03.752: INFO: Created: latency-svc-qbrhn Mar 21 21:11:03.764: INFO: Got endpoints: latency-svc-qbrhn [777.631107ms] Mar 21 21:11:03.794: INFO: Created: latency-svc-qfn75 Mar 21 21:11:03.806: INFO: Got endpoints: latency-svc-qfn75 [753.953263ms] Mar 21 21:11:03.874: INFO: Created: latency-svc-9zxpt Mar 21 21:11:03.884: INFO: Got endpoints: latency-svc-9zxpt [776.612982ms] Mar 21 21:11:03.904: INFO: Created: latency-svc-2bw4k Mar 21 21:11:03.920: INFO: Got endpoints: latency-svc-2bw4k [752.351143ms] Mar 21 21:11:03.940: INFO: Created: latency-svc-8dfwt Mar 21 21:11:03.950: INFO: Got endpoints: latency-svc-8dfwt [722.176815ms] Mar 21 21:11:03.968: INFO: Created: latency-svc-86qmm Mar 21 21:11:04.005: INFO: Got endpoints: latency-svc-86qmm [741.664301ms] Mar 21 21:11:04.010: INFO: Created: latency-svc-pp7qw Mar 21 21:11:04.023: INFO: Got endpoints: latency-svc-pp7qw [714.145701ms] Mar 21 21:11:04.060: INFO: Created: latency-svc-s5btx Mar 21 21:11:04.083: INFO: Got endpoints: latency-svc-s5btx [728.535302ms] Mar 21 21:11:04.149: INFO: Created: latency-svc-6n7bw Mar 21 21:11:04.152: INFO: Got endpoints: latency-svc-6n7bw [754.960921ms] Mar 21 21:11:04.190: INFO: Created: latency-svc-cpqdw Mar 21 21:11:04.203: INFO: Got endpoints: latency-svc-cpqdw [763.989918ms] Mar 21 21:11:04.226: INFO: Created: latency-svc-gmllp Mar 21 21:11:04.240: INFO: Got endpoints: latency-svc-gmllp [740.388653ms] Mar 21 21:11:04.287: INFO: Created: latency-svc-z5kjz Mar 21 21:11:04.311: INFO: Got endpoints: latency-svc-z5kjz [736.801706ms] Mar 21 21:11:04.312: INFO: Created: latency-svc-95whw Mar 21 21:11:04.324: INFO: Got endpoints: latency-svc-95whw [704.444842ms] Mar 21 21:11:04.342: INFO: Created: latency-svc-r7k4b Mar 21 21:11:04.355: INFO: Got endpoints: latency-svc-r7k4b [690.602118ms] Mar 21 21:11:04.382: INFO: Created: latency-svc-5s2bx Mar 21 21:11:04.436: INFO: Got endpoints: latency-svc-5s2bx [702.340322ms] Mar 21 21:11:04.438: INFO: Created: latency-svc-x7ws9 Mar 21 21:11:04.451: INFO: Got endpoints: latency-svc-x7ws9 [686.853213ms] Mar 21 21:11:04.480: INFO: Created: latency-svc-8pz5m Mar 21 21:11:04.510: INFO: Got endpoints: latency-svc-8pz5m [704.398231ms] Mar 21 21:11:04.574: INFO: Created: latency-svc-bqgwn Mar 21 21:11:04.576: INFO: Got endpoints: latency-svc-bqgwn [692.482419ms] Mar 21 21:11:04.604: INFO: Created: latency-svc-pzlcs Mar 21 21:11:04.628: INFO: Got endpoints: latency-svc-pzlcs [708.080487ms] Mar 21 21:11:04.660: INFO: Created: latency-svc-s2q5j Mar 21 21:11:04.668: INFO: Got endpoints: latency-svc-s2q5j [717.74819ms] Mar 21 21:11:04.718: INFO: Created: latency-svc-kfqzn Mar 21 21:11:04.722: INFO: Got endpoints: latency-svc-kfqzn [716.658311ms] Mar 21 21:11:04.744: INFO: Created: latency-svc-574pf Mar 21 21:11:04.759: INFO: Got endpoints: latency-svc-574pf [735.925107ms] Mar 21 21:11:04.784: INFO: Created: latency-svc-b8l8b Mar 21 21:11:04.801: INFO: Got endpoints: latency-svc-b8l8b [717.809576ms] Mar 21 21:11:04.858: INFO: Created: latency-svc-6nhhh Mar 21 21:11:04.881: INFO: Got endpoints: latency-svc-6nhhh [729.569993ms] Mar 21 21:11:04.882: INFO: Created: latency-svc-fpvln Mar 21 21:11:04.898: INFO: Got endpoints: latency-svc-fpvln [694.361004ms] Mar 21 21:11:04.923: INFO: Created: latency-svc-xvnkw Mar 21 21:11:04.946: INFO: Got endpoints: 
latency-svc-xvnkw [706.468505ms] Mar 21 21:11:04.991: INFO: Created: latency-svc-l94qr Mar 21 21:11:04.991: INFO: Got endpoints: latency-svc-l94qr [679.271823ms] Mar 21 21:11:05.054: INFO: Created: latency-svc-gtztz Mar 21 21:11:05.072: INFO: Got endpoints: latency-svc-gtztz [748.203222ms] Mar 21 21:11:05.131: INFO: Created: latency-svc-x2zkh Mar 21 21:11:05.138: INFO: Got endpoints: latency-svc-x2zkh [783.63349ms] Mar 21 21:11:05.186: INFO: Created: latency-svc-7dbrj Mar 21 21:11:05.198: INFO: Got endpoints: latency-svc-7dbrj [761.863025ms] Mar 21 21:11:05.228: INFO: Created: latency-svc-7776m Mar 21 21:11:05.263: INFO: Got endpoints: latency-svc-7776m [811.877576ms] Mar 21 21:11:05.294: INFO: Created: latency-svc-7kg69 Mar 21 21:11:05.319: INFO: Got endpoints: latency-svc-7kg69 [809.223093ms] Mar 21 21:11:05.349: INFO: Created: latency-svc-8hgt6 Mar 21 21:11:05.362: INFO: Got endpoints: latency-svc-8hgt6 [785.548396ms] Mar 21 21:11:05.400: INFO: Created: latency-svc-kjcxk Mar 21 21:11:05.410: INFO: Got endpoints: latency-svc-kjcxk [782.005055ms] Mar 21 21:11:05.432: INFO: Created: latency-svc-7tdgc Mar 21 21:11:05.446: INFO: Got endpoints: latency-svc-7tdgc [778.174193ms] Mar 21 21:11:05.468: INFO: Created: latency-svc-vrl9m Mar 21 21:11:05.476: INFO: Got endpoints: latency-svc-vrl9m [754.01177ms] Mar 21 21:11:05.498: INFO: Created: latency-svc-pcj25 Mar 21 21:11:05.532: INFO: Got endpoints: latency-svc-pcj25 [773.617876ms] Mar 21 21:11:05.548: INFO: Created: latency-svc-qfdf7 Mar 21 21:11:05.561: INFO: Got endpoints: latency-svc-qfdf7 [760.227476ms] Mar 21 21:11:05.584: INFO: Created: latency-svc-zpd8j Mar 21 21:11:05.593: INFO: Got endpoints: latency-svc-zpd8j [711.931154ms] Mar 21 21:11:05.630: INFO: Created: latency-svc-jg4fg Mar 21 21:11:05.694: INFO: Got endpoints: latency-svc-jg4fg [795.897705ms] Mar 21 21:11:05.715: INFO: Created: latency-svc-f6szk Mar 21 21:11:05.732: INFO: Got endpoints: latency-svc-f6szk [785.473257ms] Mar 21 21:11:05.757: INFO: Created: latency-svc-t2vlw Mar 21 21:11:05.768: INFO: Got endpoints: latency-svc-t2vlw [776.933755ms] Mar 21 21:11:05.838: INFO: Created: latency-svc-wt98g Mar 21 21:11:05.882: INFO: Created: latency-svc-qszq5 Mar 21 21:11:05.882: INFO: Got endpoints: latency-svc-wt98g [809.901896ms] Mar 21 21:11:05.894: INFO: Got endpoints: latency-svc-qszq5 [755.543701ms] Mar 21 21:11:05.919: INFO: Created: latency-svc-zhtv4 Mar 21 21:11:05.964: INFO: Got endpoints: latency-svc-zhtv4 [765.212091ms] Mar 21 21:11:05.973: INFO: Created: latency-svc-2gwbl Mar 21 21:11:05.985: INFO: Got endpoints: latency-svc-2gwbl [721.894441ms] Mar 21 21:11:06.010: INFO: Created: latency-svc-t9r8r Mar 21 21:11:06.021: INFO: Got endpoints: latency-svc-t9r8r [701.580667ms] Mar 21 21:11:06.050: INFO: Created: latency-svc-kf277 Mar 21 21:11:06.131: INFO: Got endpoints: latency-svc-kf277 [769.146408ms] Mar 21 21:11:06.131: INFO: Latencies: [43.699432ms 85.901193ms 96.825472ms 166.922654ms 224.24874ms 260.208597ms 296.30153ms 355.091513ms 400.573322ms 429.587319ms 517.526486ms 547.089522ms 640.518173ms 670.19254ms 679.271823ms 686.853213ms 690.602118ms 692.482419ms 694.361004ms 697.705132ms 701.580667ms 702.340322ms 703.961015ms 704.398231ms 704.444842ms 704.962813ms 706.468505ms 708.080487ms 710.483657ms 711.931154ms 714.145701ms 716.658311ms 717.74819ms 717.809576ms 718.33001ms 718.868989ms 721.894441ms 722.176815ms 728.535302ms 729.569993ms 729.981133ms 730.170141ms 731.37366ms 735.925107ms 736.801706ms 739.080045ms 740.388653ms 741.664301ms 748.203222ms 752.351143ms 
753.953263ms 754.01177ms 754.960921ms 755.543701ms 756.654024ms 757.278099ms 759.504756ms 759.641235ms 760.194016ms 760.227476ms 761.863025ms 762.361916ms 763.989918ms 764.284687ms 765.212091ms 765.551274ms 766.158577ms 767.281344ms 767.58033ms 768.748674ms 769.146408ms 769.312717ms 771.572941ms 773.617876ms 775.628364ms 776.612982ms 776.933755ms 777.631107ms 777.868551ms 778.174193ms 780.539648ms 782.005055ms 783.586257ms 783.63349ms 783.980987ms 784.178779ms 784.856149ms 785.473257ms 785.548396ms 786.176788ms 790.253069ms 793.441377ms 794.232732ms 794.736458ms 795.311175ms 795.897705ms 796.228887ms 801.62011ms 803.243296ms 803.937064ms 805.732353ms 809.223093ms 809.767494ms 809.901896ms 811.768305ms 811.877576ms 816.337108ms 819.067891ms 830.358893ms 831.71952ms 834.153582ms 836.047373ms 836.61799ms 837.323822ms 839.27094ms 839.697464ms 843.286402ms 844.352263ms 848.844533ms 849.983396ms 851.954884ms 853.940859ms 855.405025ms 855.73456ms 857.221142ms 859.132219ms 859.956234ms 860.32445ms 862.32574ms 863.100843ms 866.091386ms 868.030454ms 870.149999ms 877.072368ms 877.74208ms 877.857603ms 879.015042ms 879.640173ms 880.729428ms 882.060041ms 882.91176ms 886.122685ms 889.518214ms 889.618264ms 890.44433ms 890.839626ms 895.846507ms 896.037187ms 896.692399ms 897.809254ms 898.192644ms 898.353059ms 900.543178ms 901.035026ms 902.791708ms 903.12821ms 903.419078ms 914.112626ms 914.262283ms 916.034347ms 918.482431ms 922.285721ms 923.009555ms 926.984324ms 933.652198ms 936.076462ms 937.497135ms 957.331961ms 963.259767ms 966.591487ms 985.499414ms 989.508568ms 990.831952ms 994.340249ms 997.352325ms 1.005746908s 1.018108453s 1.028704706s 1.029235742s 1.034931384s 1.037174333s 1.037484881s 1.068195653s 1.069479625s 1.070578166s 1.085075858s 1.095098315s 1.125986193s 1.134646354s 1.135020669s 1.145880723s 1.158485308s 1.165648424s 1.179155941s 1.195955255s 1.199958505s 1.203151634s 1.210145744s 1.211959722s 1.255958032s] Mar 21 21:11:06.131: INFO: 50 %ile: 805.732353ms Mar 21 21:11:06.131: INFO: 90 %ile: 1.037174333s Mar 21 21:11:06.131: INFO: 99 %ile: 1.211959722s Mar 21 21:11:06.131: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:11:06.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1665" for this suite. 
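The latency figures above come from one replication controller plus roughly 200 throwaway Services created against it; each "Created"/"Got endpoints" pair measures how long it takes for a new Service's Endpoints object to be populated. A minimal sketch of the resource pair involved (the agnhost image and pod label are taken from the log; the Service name and ports are illustrative, assuming serve-hostname's default port 9376):

apiVersion: v1
kind: ReplicationController
metadata:
  name: svc-latency-rc
spec:
  replicas: 1
  selector:
    name: svc-latency-rc
  template:
    metadata:
      labels:
        name: svc-latency-rc
    spec:
      containers:
      - name: svc-latency-rc
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]
---
apiVersion: v1
kind: Service
metadata:
  name: latency-svc-example
spec:
  selector:
    name: svc-latency-rc     # matches the RC's pod label, so endpoints appear once the pod is Ready
  ports:
  - port: 80
    targetPort: 9376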
• [SLOW TEST:14.593 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":20,"skipped":464,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:11:06.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 21 21:11:10.217: INFO: &Pod{ObjectMeta:{send-events-2c8eccb2-a518-4895-bb0d-023314e8d770 events-5607 /api/v1/namespaces/events-5607/pods/send-events-2c8eccb2-a518-4895-bb0d-023314e8d770 d61ec0ab-ae37-4983-88f4-21ded3c52703 1641652 0 2020-03-21 21:11:06 +0000 UTC map[name:foo time:184686255] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mhjlc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mhjlc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mhjlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},Window
sOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 21:11:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 21:11:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 21:11:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 21:11:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.100,StartTime:2020-03-21 21:11:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-21 21:11:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://31e936bdade89c03be7c1a6691047b2317610d82c6571d6ab82d3d0fba0f5d70,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 21 21:11:12.227: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 21 21:11:14.241: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:11:14.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5607" for this suite. 
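The pod dump above is verbose, but the manifest behind it is small: an agnhost serve-hostname container with a name=foo label. After creating it, the test lists events field-selected to the pod and expects at least one scheduler event (Scheduled) and one kubelet event (e.g. Pulled/Created/Started), matching the "Saw scheduler event" and "Saw kubelet event" lines. A minimal sketch (name and label values illustrative; image, args, and port taken from the dump):

apiVersion: v1
kind: Pod
metadata:
  name: send-events-example
  labels:
    name: foo
    time: "184686255"        # the test stamps a time label; any string works
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["serve-hostname"]
    ports:
    - containerPort: 80
      protocol: TCP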
• [SLOW TEST:8.438 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":21,"skipped":477,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:11:14.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 21:11:14.794: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb3bb48e-1758-40c9-8e11-c104bd9a60d4" in namespace "downward-api-6359" to be "success or failure" Mar 21 21:11:14.850: INFO: Pod "downwardapi-volume-bb3bb48e-1758-40c9-8e11-c104bd9a60d4": Phase="Pending", Reason="", readiness=false. Elapsed: 55.830063ms Mar 21 21:11:16.922: INFO: Pod "downwardapi-volume-bb3bb48e-1758-40c9-8e11-c104bd9a60d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127973182s Mar 21 21:11:18.944: INFO: Pod "downwardapi-volume-bb3bb48e-1758-40c9-8e11-c104bd9a60d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.14985494s STEP: Saw pod success Mar 21 21:11:18.944: INFO: Pod "downwardapi-volume-bb3bb48e-1758-40c9-8e11-c104bd9a60d4" satisfied condition "success or failure" Mar 21 21:11:18.946: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-bb3bb48e-1758-40c9-8e11-c104bd9a60d4 container client-container: STEP: delete the pod Mar 21 21:11:19.226: INFO: Waiting for pod downwardapi-volume-bb3bb48e-1758-40c9-8e11-c104bd9a60d4 to disappear Mar 21 21:11:19.236: INFO: Pod downwardapi-volume-bb3bb48e-1758-40c9-8e11-c104bd9a60d4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:11:19.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6359" for this suite. 
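In this downward-API case the container deliberately sets no CPU limit, so the projected file falls back to the node's allocatable CPU, which is what the test asserts. A minimal sketch (image and paths illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-default-limit-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # note: no resources.limits.cpu here, so the file reports node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m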
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":486,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:11:19.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-88377f96-9bbc-465e-a9c5-7f42bbde148e STEP: Creating a pod to test consume configMaps Mar 21 21:11:19.376: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bb2fc64f-f65a-4ea8-a753-3f6b58aeb215" in namespace "projected-4796" to be "success or failure" Mar 21 21:11:19.399: INFO: Pod "pod-projected-configmaps-bb2fc64f-f65a-4ea8-a753-3f6b58aeb215": Phase="Pending", Reason="", readiness=false. Elapsed: 23.031165ms Mar 21 21:11:21.412: INFO: Pod "pod-projected-configmaps-bb2fc64f-f65a-4ea8-a753-3f6b58aeb215": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035820984s Mar 21 21:11:23.437: INFO: Pod "pod-projected-configmaps-bb2fc64f-f65a-4ea8-a753-3f6b58aeb215": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06094956s STEP: Saw pod success Mar 21 21:11:23.437: INFO: Pod "pod-projected-configmaps-bb2fc64f-f65a-4ea8-a753-3f6b58aeb215" satisfied condition "success or failure" Mar 21 21:11:23.440: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-bb2fc64f-f65a-4ea8-a753-3f6b58aeb215 container projected-configmap-volume-test: STEP: delete the pod Mar 21 21:11:23.518: INFO: Waiting for pod pod-projected-configmaps-bb2fc64f-f65a-4ea8-a753-3f6b58aeb215 to disappear Mar 21 21:11:23.534: INFO: Pod pod-projected-configmaps-bb2fc64f-f65a-4ea8-a753-3f6b58aeb215 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:11:23.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4796" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":487,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:11:23.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 21 21:11:23.864: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8918 /api/v1/namespaces/watch-8918/configmaps/e2e-watch-test-resource-version 05863055-ed77-45e4-b63e-55496bd123cd 1642191 0 2020-03-21 21:11:23 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 21 21:11:23.864: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8918 /api/v1/namespaces/watch-8918/configmaps/e2e-watch-test-resource-version 05863055-ed77-45e4-b63e-55496bd123cd 1642193 0 2020-03-21 21:11:23 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:11:23.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8918" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":24,"skipped":490,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:11:23.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:11:24.058: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 21 21:11:24.088: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:24.099: INFO: Number of nodes with available pods: 0 Mar 21 21:11:24.100: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:11:25.163: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:25.166: INFO: Number of nodes with available pods: 0 Mar 21 21:11:25.166: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:11:26.117: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:26.129: INFO: Number of nodes with available pods: 0 Mar 21 21:11:26.129: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:11:27.222: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:27.240: INFO: Number of nodes with available pods: 0 Mar 21 21:11:27.240: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:11:28.125: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:28.151: INFO: Number of nodes with available pods: 1 Mar 21 21:11:28.151: INFO: Node jerma-worker2 is running more than one daemon pod Mar 21 21:11:29.118: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:29.139: INFO: Number of nodes with available pods: 2 Mar 21 21:11:29.139: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 21 21:11:29.391: INFO: Wrong image for pod: daemon-set-bf2hp. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 21 21:11:29.391: INFO: Wrong image for pod: daemon-set-phcvd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 21 21:11:29.395: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:30.401: INFO: Wrong image for pod: daemon-set-bf2hp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 21 21:11:30.401: INFO: Wrong image for pod: daemon-set-phcvd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 21 21:11:30.405: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:31.458: INFO: Wrong image for pod: daemon-set-bf2hp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 21 21:11:31.458: INFO: Pod daemon-set-bf2hp is not available Mar 21 21:11:31.458: INFO: Wrong image for pod: daemon-set-phcvd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 21 21:11:31.461: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:32.407: INFO: Wrong image for pod: daemon-set-phcvd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 21 21:11:32.407: INFO: Pod daemon-set-wdcnc is not available Mar 21 21:11:32.428: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:33.403: INFO: Wrong image for pod: daemon-set-phcvd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 21 21:11:33.403: INFO: Pod daemon-set-wdcnc is not available Mar 21 21:11:33.420: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:34.400: INFO: Wrong image for pod: daemon-set-phcvd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 21 21:11:34.400: INFO: Pod daemon-set-wdcnc is not available Mar 21 21:11:34.403: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:35.401: INFO: Wrong image for pod: daemon-set-phcvd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 21 21:11:35.405: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:36.437: INFO: Wrong image for pod: daemon-set-phcvd. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 21 21:11:36.437: INFO: Pod daemon-set-phcvd is not available Mar 21 21:11:36.451: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:37.467: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:38.400: INFO: Pod daemon-set-8tl6l is not available Mar 21 21:11:38.404: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 21 21:11:38.407: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:38.410: INFO: Number of nodes with available pods: 1 Mar 21 21:11:38.410: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:11:39.417: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:39.420: INFO: Number of nodes with available pods: 1 Mar 21 21:11:39.420: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:11:40.418: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:11:40.432: INFO: Number of nodes with available pods: 2 Mar 21 21:11:40.432: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4610, will wait for the garbage collector to delete the pods Mar 21 21:11:40.505: INFO: Deleting DaemonSet.extensions daemon-set took: 6.828312ms Mar 21 21:11:40.605: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.21696ms Mar 21 21:11:49.508: INFO: Number of nodes with available pods: 0 Mar 21 21:11:49.508: INFO: Number of running nodes: 0, number of available pods: 0 Mar 21 21:11:49.511: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4610/daemonsets","resourceVersion":"1642540"},"items":null} Mar 21 21:11:49.533: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4610/pods","resourceVersion":"1642540"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:11:49.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4610" for this suite. 
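The rollout that produced the polling above can be reproduced with plain kubectl against any DaemonSet. A hedged sketch; the container name app is an assumption, since the e2e manifest's container name is not visible in this log:

    # Trigger the same image update the test performs (container name assumed).
    kubectl -n daemonsets-4610 set image daemonset/daemon-set \
        app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8

    # With updateStrategy.type=RollingUpdate the controller replaces daemon
    # pods node by node; this waits until every node runs the new image.
    kubectl -n daemonsets-4610 rollout status daemonset/daemon-set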
• [SLOW TEST:25.659 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":25,"skipped":501,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:11:49.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-a5b1a0e2-cc14-4988-8d34-ee44e39073f9
STEP: Creating a pod to test consume configMaps
Mar 21 21:11:49.623: INFO: Waiting up to 5m0s for pod "pod-configmaps-9efa6a64-f4cd-443b-84e0-a683105a4281" in namespace "configmap-4887" to be "success or failure"
Mar 21 21:11:49.626: INFO: Pod "pod-configmaps-9efa6a64-f4cd-443b-84e0-a683105a4281": Phase="Pending", Reason="", readiness=false. Elapsed: 2.931504ms
Mar 21 21:11:51.629: INFO: Pod "pod-configmaps-9efa6a64-f4cd-443b-84e0-a683105a4281": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00661844s
Mar 21 21:11:53.633: INFO: Pod "pod-configmaps-9efa6a64-f4cd-443b-84e0-a683105a4281": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010692394s
STEP: Saw pod success
Mar 21 21:11:53.634: INFO: Pod "pod-configmaps-9efa6a64-f4cd-443b-84e0-a683105a4281" satisfied condition "success or failure"
Mar 21 21:11:53.637: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-9efa6a64-f4cd-443b-84e0-a683105a4281 container configmap-volume-test:
STEP: delete the pod
Mar 21 21:11:53.657: INFO: Waiting for pod pod-configmaps-9efa6a64-f4cd-443b-84e0-a683105a4281 to disappear
Mar 21 21:11:53.670: INFO: Pod pod-configmaps-9efa6a64-f4cd-443b-84e0-a683105a4281 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:11:53.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4887" for this suite.
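The pod manifest the test generates is not printed in the log; the following hand-written equivalent shows the pattern under test, with busybox standing in for the e2e mounttest image and all names illustrative:

    # A ConfigMap projected into a volume, with defaultMode setting the
    # permission bits on each projected file.
    kubectl create configmap demo-cm --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-defaultmode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox:1.31
        # -L dereferences the symlink the projection creates; expect "644".
        command: ["sh", "-c", "stat -Lc '%a' /etc/cm/data-1 && cat /etc/cm/data-1"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: demo-cm
          defaultMode: 0644
    EOF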
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":503,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:11:53.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-8e2a083a-6372-41bf-8ca8-9d9d3322c8df STEP: Creating secret with name s-test-opt-upd-761bf789-73a5-42c4-83a5-e95f548adb72 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-8e2a083a-6372-41bf-8ca8-9d9d3322c8df STEP: Updating secret s-test-opt-upd-761bf789-73a5-42c4-83a5-e95f548adb72 STEP: Creating secret with name s-test-opt-create-5fac2a1c-fa70-4d88-a787-8099770c2b76 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:13:04.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-128" for this suite. • [SLOW TEST:70.599 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":513,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:13:04.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3425 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3425 I0321 21:13:04.436197 6 runners.go:189] Created replication controller with name: externalname-service, namespace: 
services-3425, replica count: 2 I0321 21:13:07.486643 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 21:13:10.486901 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 21 21:13:10.486: INFO: Creating new exec pod Mar 21 21:13:15.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3425 execpodnqdls -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 21 21:13:18.062: INFO: stderr: "I0321 21:13:18.003852 37 log.go:172] (0xc0000f3760) (0xc0006e1ea0) Create stream\nI0321 21:13:18.003922 37 log.go:172] (0xc0000f3760) (0xc0006e1ea0) Stream added, broadcasting: 1\nI0321 21:13:18.006570 37 log.go:172] (0xc0000f3760) Reply frame received for 1\nI0321 21:13:18.006603 37 log.go:172] (0xc0000f3760) (0xc0006e1f40) Create stream\nI0321 21:13:18.006613 37 log.go:172] (0xc0000f3760) (0xc0006e1f40) Stream added, broadcasting: 3\nI0321 21:13:18.008688 37 log.go:172] (0xc0000f3760) Reply frame received for 3\nI0321 21:13:18.008715 37 log.go:172] (0xc0000f3760) (0xc0006a06e0) Create stream\nI0321 21:13:18.008727 37 log.go:172] (0xc0000f3760) (0xc0006a06e0) Stream added, broadcasting: 5\nI0321 21:13:18.009449 37 log.go:172] (0xc0000f3760) Reply frame received for 5\nI0321 21:13:18.055701 37 log.go:172] (0xc0000f3760) Data frame received for 3\nI0321 21:13:18.055727 37 log.go:172] (0xc0006e1f40) (3) Data frame handling\nI0321 21:13:18.055755 37 log.go:172] (0xc0000f3760) Data frame received for 5\nI0321 21:13:18.055785 37 log.go:172] (0xc0006a06e0) (5) Data frame handling\nI0321 21:13:18.055817 37 log.go:172] (0xc0006a06e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0321 21:13:18.055971 37 log.go:172] (0xc0000f3760) Data frame received for 5\nI0321 21:13:18.055986 37 log.go:172] (0xc0006a06e0) (5) Data frame handling\nI0321 21:13:18.057815 37 log.go:172] (0xc0000f3760) Data frame received for 1\nI0321 21:13:18.057832 37 log.go:172] (0xc0006e1ea0) (1) Data frame handling\nI0321 21:13:18.057844 37 log.go:172] (0xc0006e1ea0) (1) Data frame sent\nI0321 21:13:18.057857 37 log.go:172] (0xc0000f3760) (0xc0006e1ea0) Stream removed, broadcasting: 1\nI0321 21:13:18.057965 37 log.go:172] (0xc0000f3760) Go away received\nI0321 21:13:18.058340 37 log.go:172] (0xc0000f3760) (0xc0006e1ea0) Stream removed, broadcasting: 1\nI0321 21:13:18.058365 37 log.go:172] (0xc0000f3760) (0xc0006e1f40) Stream removed, broadcasting: 3\nI0321 21:13:18.058376 37 log.go:172] (0xc0000f3760) (0xc0006a06e0) Stream removed, broadcasting: 5\n" Mar 21 21:13:18.062: INFO: stdout: "" Mar 21 21:13:18.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3425 execpodnqdls -- /bin/sh -x -c nc -zv -t -w 2 10.106.191.63 80' Mar 21 21:13:18.271: INFO: stderr: "I0321 21:13:18.197935 63 log.go:172] (0xc0000f4dc0) (0xc0006bb900) Create stream\nI0321 21:13:18.197987 63 log.go:172] (0xc0000f4dc0) (0xc0006bb900) Stream added, broadcasting: 1\nI0321 21:13:18.200853 63 log.go:172] (0xc0000f4dc0) Reply frame received for 1\nI0321 21:13:18.200898 63 log.go:172] (0xc0000f4dc0) (0xc000a5a000) Create stream\nI0321 21:13:18.200914 63 log.go:172] (0xc0000f4dc0) (0xc000a5a000) Stream added, broadcasting: 3\nI0321 21:13:18.202185 63 log.go:172] (0xc0000f4dc0) Reply 
frame received for 3\nI0321 21:13:18.202261 63 log.go:172] (0xc0000f4dc0) (0xc00030c000) Create stream\nI0321 21:13:18.202294 63 log.go:172] (0xc0000f4dc0) (0xc00030c000) Stream added, broadcasting: 5\nI0321 21:13:18.203480 63 log.go:172] (0xc0000f4dc0) Reply frame received for 5\nI0321 21:13:18.264392 63 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0321 21:13:18.264437 63 log.go:172] (0xc00030c000) (5) Data frame handling\nI0321 21:13:18.264472 63 log.go:172] (0xc00030c000) (5) Data frame sent\nI0321 21:13:18.264488 63 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0321 21:13:18.264505 63 log.go:172] (0xc00030c000) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.191.63 80\nConnection to 10.106.191.63 80 port [tcp/http] succeeded!\nI0321 21:13:18.264545 63 log.go:172] (0xc0000f4dc0) Data frame received for 3\nI0321 21:13:18.264559 63 log.go:172] (0xc000a5a000) (3) Data frame handling\nI0321 21:13:18.266424 63 log.go:172] (0xc0000f4dc0) Data frame received for 1\nI0321 21:13:18.266448 63 log.go:172] (0xc0006bb900) (1) Data frame handling\nI0321 21:13:18.266458 63 log.go:172] (0xc0006bb900) (1) Data frame sent\nI0321 21:13:18.266471 63 log.go:172] (0xc0000f4dc0) (0xc0006bb900) Stream removed, broadcasting: 1\nI0321 21:13:18.266528 63 log.go:172] (0xc0000f4dc0) Go away received\nI0321 21:13:18.266770 63 log.go:172] (0xc0000f4dc0) (0xc0006bb900) Stream removed, broadcasting: 1\nI0321 21:13:18.266782 63 log.go:172] (0xc0000f4dc0) (0xc000a5a000) Stream removed, broadcasting: 3\nI0321 21:13:18.266788 63 log.go:172] (0xc0000f4dc0) (0xc00030c000) Stream removed, broadcasting: 5\n" Mar 21 21:13:18.271: INFO: stdout: "" Mar 21 21:13:18.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3425 execpodnqdls -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31598' Mar 21 21:13:18.484: INFO: stderr: "I0321 21:13:18.412112 85 log.go:172] (0xc0000f4a50) (0xc00063fcc0) Create stream\nI0321 21:13:18.412185 85 log.go:172] (0xc0000f4a50) (0xc00063fcc0) Stream added, broadcasting: 1\nI0321 21:13:18.415277 85 log.go:172] (0xc0000f4a50) Reply frame received for 1\nI0321 21:13:18.415309 85 log.go:172] (0xc0000f4a50) (0xc00063fd60) Create stream\nI0321 21:13:18.415320 85 log.go:172] (0xc0000f4a50) (0xc00063fd60) Stream added, broadcasting: 3\nI0321 21:13:18.416574 85 log.go:172] (0xc0000f4a50) Reply frame received for 3\nI0321 21:13:18.416624 85 log.go:172] (0xc0000f4a50) (0xc00074f400) Create stream\nI0321 21:13:18.416640 85 log.go:172] (0xc0000f4a50) (0xc00074f400) Stream added, broadcasting: 5\nI0321 21:13:18.418206 85 log.go:172] (0xc0000f4a50) Reply frame received for 5\nI0321 21:13:18.476396 85 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0321 21:13:18.476439 85 log.go:172] (0xc00074f400) (5) Data frame handling\nI0321 21:13:18.476485 85 log.go:172] (0xc00074f400) (5) Data frame sent\nI0321 21:13:18.476556 85 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0321 21:13:18.476587 85 log.go:172] (0xc00074f400) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31598\nConnection to 172.17.0.10 31598 port [tcp/31598] succeeded!\nI0321 21:13:18.476903 85 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0321 21:13:18.476938 85 log.go:172] (0xc00063fd60) (3) Data frame handling\nI0321 21:13:18.478853 85 log.go:172] (0xc0000f4a50) Data frame received for 1\nI0321 21:13:18.478878 85 log.go:172] (0xc00063fcc0) (1) Data frame handling\nI0321 21:13:18.478894 85 log.go:172] (0xc00063fcc0) (1) Data frame sent\nI0321 21:13:18.478914 85 
log.go:172] (0xc0000f4a50) (0xc00063fcc0) Stream removed, broadcasting: 1\nI0321 21:13:18.478946 85 log.go:172] (0xc0000f4a50) Go away received\nI0321 21:13:18.479230 85 log.go:172] (0xc0000f4a50) (0xc00063fcc0) Stream removed, broadcasting: 1\nI0321 21:13:18.479245 85 log.go:172] (0xc0000f4a50) (0xc00063fd60) Stream removed, broadcasting: 3\nI0321 21:13:18.479252 85 log.go:172] (0xc0000f4a50) (0xc00074f400) Stream removed, broadcasting: 5\n" Mar 21 21:13:18.484: INFO: stdout: "" Mar 21 21:13:18.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3425 execpodnqdls -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31598' Mar 21 21:13:18.695: INFO: stderr: "I0321 21:13:18.615544 105 log.go:172] (0xc0006f68f0) (0xc0006ec1e0) Create stream\nI0321 21:13:18.615593 105 log.go:172] (0xc0006f68f0) (0xc0006ec1e0) Stream added, broadcasting: 1\nI0321 21:13:18.619072 105 log.go:172] (0xc0006f68f0) Reply frame received for 1\nI0321 21:13:18.619130 105 log.go:172] (0xc0006f68f0) (0xc000551ae0) Create stream\nI0321 21:13:18.619143 105 log.go:172] (0xc0006f68f0) (0xc000551ae0) Stream added, broadcasting: 3\nI0321 21:13:18.620159 105 log.go:172] (0xc0006f68f0) Reply frame received for 3\nI0321 21:13:18.620205 105 log.go:172] (0xc0006f68f0) (0xc0003cc000) Create stream\nI0321 21:13:18.620216 105 log.go:172] (0xc0006f68f0) (0xc0003cc000) Stream added, broadcasting: 5\nI0321 21:13:18.621292 105 log.go:172] (0xc0006f68f0) Reply frame received for 5\nI0321 21:13:18.689300 105 log.go:172] (0xc0006f68f0) Data frame received for 5\nI0321 21:13:18.689372 105 log.go:172] (0xc0003cc000) (5) Data frame handling\nI0321 21:13:18.689385 105 log.go:172] (0xc0003cc000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 31598\nI0321 21:13:18.689701 105 log.go:172] (0xc0006f68f0) Data frame received for 5\nI0321 21:13:18.689717 105 log.go:172] (0xc0003cc000) (5) Data frame handling\nI0321 21:13:18.689726 105 log.go:172] (0xc0003cc000) (5) Data frame sent\nConnection to 172.17.0.8 31598 port [tcp/31598] succeeded!\nI0321 21:13:18.690045 105 log.go:172] (0xc0006f68f0) Data frame received for 3\nI0321 21:13:18.690074 105 log.go:172] (0xc000551ae0) (3) Data frame handling\nI0321 21:13:18.690214 105 log.go:172] (0xc0006f68f0) Data frame received for 5\nI0321 21:13:18.690232 105 log.go:172] (0xc0003cc000) (5) Data frame handling\nI0321 21:13:18.691785 105 log.go:172] (0xc0006f68f0) Data frame received for 1\nI0321 21:13:18.691810 105 log.go:172] (0xc0006ec1e0) (1) Data frame handling\nI0321 21:13:18.691825 105 log.go:172] (0xc0006ec1e0) (1) Data frame sent\nI0321 21:13:18.691851 105 log.go:172] (0xc0006f68f0) (0xc0006ec1e0) Stream removed, broadcasting: 1\nI0321 21:13:18.691869 105 log.go:172] (0xc0006f68f0) Go away received\nI0321 21:13:18.692150 105 log.go:172] (0xc0006f68f0) (0xc0006ec1e0) Stream removed, broadcasting: 1\nI0321 21:13:18.692167 105 log.go:172] (0xc0006f68f0) (0xc000551ae0) Stream removed, broadcasting: 3\nI0321 21:13:18.692174 105 log.go:172] (0xc0006f68f0) (0xc0003cc000) Stream removed, broadcasting: 5\n" Mar 21 21:13:18.696: INFO: stdout: "" Mar 21 21:13:18.696: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:13:18.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3425" for this suite. 
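For reference, the type flip at the centre of this spec can be issued directly. A hedged sketch against the names in the log; a real reproduction would also need the backing pods, which the test supplies through its replication controller:

    # Convert the ExternalName service to NodePort. Clearing externalName
    # and supplying a port are assumptions about the minimal valid patch.
    kubectl -n services-3425 patch service externalname-service --type=merge \
        -p '{"spec":{"type":"NodePort","externalName":null,"ports":[{"port":80}]}}'

    # The verification mirrors the probes in the log: service name first,
    # then ClusterIP, then each node's NodePort.
    kubectl -n services-3425 exec execpodnqdls -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'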
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:14.459 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":28,"skipped":544,"failed":0}
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:13:18.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 21 21:13:18.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5990483e-a6a2-4c7b-a7e4-a9acea036293" in namespace "projected-1603" to be "success or failure"
Mar 21 21:13:18.840: INFO: Pod "downwardapi-volume-5990483e-a6a2-4c7b-a7e4-a9acea036293": Phase="Pending", Reason="", readiness=false. Elapsed: 4.300057ms
Mar 21 21:13:20.875: INFO: Pod "downwardapi-volume-5990483e-a6a2-4c7b-a7e4-a9acea036293": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039563815s
Mar 21 21:13:22.882: INFO: Pod "downwardapi-volume-5990483e-a6a2-4c7b-a7e4-a9acea036293": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045714062s
STEP: Saw pod success
Mar 21 21:13:22.882: INFO: Pod "downwardapi-volume-5990483e-a6a2-4c7b-a7e4-a9acea036293" satisfied condition "success or failure"
Mar 21 21:13:22.884: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-5990483e-a6a2-4c7b-a7e4-a9acea036293 container client-container:
STEP: delete the pod
Mar 21 21:13:22.920: INFO: Waiting for pod downwardapi-volume-5990483e-a6a2-4c7b-a7e4-a9acea036293 to disappear
Mar 21 21:13:22.933: INFO: Pod downwardapi-volume-5990483e-a6a2-4c7b-a7e4-a9acea036293 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:13:22.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1603" for this suite.
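The projected downward API volume behind this spec looks roughly like the manifest below; a sketch, with busybox standing in for the e2e client image and all names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.31
        command: ["sh", "-c", "cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name   # "podname only", per the spec title
    EOF

kubectl logs downwardapi-volume-demo should then print the pod's own name, which is all the spec checks.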
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":544,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:13:22.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:13:23.041: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 21 21:13:25.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5012 create -f -' Mar 21 21:13:29.788: INFO: stderr: "" Mar 21 21:13:29.788: INFO: stdout: "e2e-test-crd-publish-openapi-3708-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 21 21:13:29.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5012 delete e2e-test-crd-publish-openapi-3708-crds test-foo' Mar 21 21:13:30.005: INFO: stderr: "" Mar 21 21:13:30.005: INFO: stdout: "e2e-test-crd-publish-openapi-3708-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 21 21:13:30.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5012 apply -f -' Mar 21 21:13:30.217: INFO: stderr: "" Mar 21 21:13:30.217: INFO: stdout: "e2e-test-crd-publish-openapi-3708-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 21 21:13:30.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5012 delete e2e-test-crd-publish-openapi-3708-crds test-foo' Mar 21 21:13:30.339: INFO: stderr: "" Mar 21 21:13:30.339: INFO: stdout: "e2e-test-crd-publish-openapi-3708-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 21 21:13:30.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5012 create -f -' Mar 21 21:13:30.568: INFO: rc: 1 Mar 21 21:13:30.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5012 apply -f -' Mar 21 21:13:30.787: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 21 21:13:30.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5012 create -f -' Mar 21 21:13:31.014: INFO: rc: 1 Mar 21 21:13:31.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5012 apply -f -' Mar 21 21:13:31.250: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 21 21:13:31.250: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3708-crds' Mar 21 21:13:31.473: INFO: stderr: "" Mar 21 21:13:31.473: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3708-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 21 21:13:31.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3708-crds.metadata' Mar 21 21:13:31.712: INFO: stderr: "" Mar 21 21:13:31.713: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3708-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. 
As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. 
A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 21 21:13:31.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3708-crds.spec' Mar 21 21:13:31.945: INFO: stderr: "" Mar 21 21:13:31.945: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3708-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 21 21:13:31.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3708-crds.spec.bars' Mar 21 21:13:32.169: INFO: stderr: "" Mar 21 21:13:32.169: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3708-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 21 21:13:32.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3708-crds.spec.bars2' Mar 21 21:13:32.408: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:13:35.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5012" for this suite. • [SLOW TEST:12.376 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":30,"skipped":545,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:13:35.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1692 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 21 21:13:35.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9081' Mar 21 21:13:35.453: INFO: stderr: 
"kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 21 21:13:35.453: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: rolling-update to same image controller Mar 21 21:13:35.461: INFO: scanned /root for discovery docs: Mar 21 21:13:35.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9081' Mar 21 21:13:51.252: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 21 21:13:51.252: INFO: stdout: "Created e2e-test-httpd-rc-033e856e431e2072b8886e1f5b18835e\nScaling up e2e-test-httpd-rc-033e856e431e2072b8886e1f5b18835e from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-033e856e431e2072b8886e1f5b18835e up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-033e856e431e2072b8886e1f5b18835e to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Mar 21 21:13:51.252: INFO: stdout: "Created e2e-test-httpd-rc-033e856e431e2072b8886e1f5b18835e\nScaling up e2e-test-httpd-rc-033e856e431e2072b8886e1f5b18835e from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-033e856e431e2072b8886e1f5b18835e up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-033e856e431e2072b8886e1f5b18835e to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 21 21:13:51.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9081' Mar 21 21:13:51.351: INFO: stderr: "" Mar 21 21:13:51.351: INFO: stdout: "e2e-test-httpd-rc-033e856e431e2072b8886e1f5b18835e-xdtpm " Mar 21 21:13:51.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-033e856e431e2072b8886e1f5b18835e-xdtpm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9081' Mar 21 21:13:51.438: INFO: stderr: "" Mar 21 21:13:51.438: INFO: stdout: "true" Mar 21 21:13:51.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-033e856e431e2072b8886e1f5b18835e-xdtpm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9081' Mar 21 21:13:51.528: INFO: stderr: "" Mar 21 21:13:51.528: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 21 21:13:51.528: INFO: e2e-test-httpd-rc-033e856e431e2072b8886e1f5b18835e-xdtpm is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1698 Mar 21 21:13:51.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9081' Mar 21 21:13:51.625: INFO: stderr: "" Mar 21 21:13:51.625: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:13:51.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9081" for this suite. • [SLOW TEST:16.319 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1687 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":31,"skipped":546,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:13:51.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 21:13:52.302: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 21:13:54.312: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422032, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422032, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422032, loc:(*time.Location)(0x7d83a80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422032, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 21:13:57.355: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:13:57.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1162" for this suite. STEP: Destroying namespace "webhook-1162-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.915 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":32,"skipped":547,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:13:57.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-e81c57bc-82e3-4d22-84fb-7f54921f4f06 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:13:57.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8563" for this suite. 
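The rejected Secret is easy to reproduce by hand; a sketch (the exact server error text can differ between versions):

    # A Secret whose data map contains an empty key never passes API
    # validation, so nothing is persisted.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: empty-key-demo
    stringData:
      "": this-value-never-lands
    EOF
    # Expected: the apiserver rejects the request with an Invalid error
    # naming the empty key.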
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":33,"skipped":551,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:13:57.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 21 21:13:58.218: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:13:58.228: INFO: Number of nodes with available pods: 0 Mar 21 21:13:58.228: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:13:59.232: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:13:59.235: INFO: Number of nodes with available pods: 0 Mar 21 21:13:59.235: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:14:00.233: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:14:00.237: INFO: Number of nodes with available pods: 0 Mar 21 21:14:00.237: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:14:01.233: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:14:01.236: INFO: Number of nodes with available pods: 0 Mar 21 21:14:01.236: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:14:02.232: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:14:02.236: INFO: Number of nodes with available pods: 2 Mar 21 21:14:02.236: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 21 21:14:02.284: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:14:02.309: INFO: Number of nodes with available pods: 1 Mar 21 21:14:02.309: INFO: Node jerma-worker2 is running more than one daemon pod Mar 21 21:14:03.326: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:14:03.330: INFO: Number of nodes with available pods: 1 Mar 21 21:14:03.330: INFO: Node jerma-worker2 is running more than one daemon pod Mar 21 21:14:04.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:14:04.317: INFO: Number of nodes with available pods: 1 Mar 21 21:14:04.317: INFO: Node jerma-worker2 is running more than one daemon pod Mar 21 21:14:05.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:14:05.317: INFO: Number of nodes with available pods: 2 Mar 21 21:14:05.317: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8745, will wait for the garbage collector to delete the pods Mar 21 21:14:05.383: INFO: Deleting DaemonSet.extensions daemon-set took: 6.167427ms Mar 21 21:14:05.783: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.263462ms Mar 21 21:14:19.586: INFO: Number of nodes with available pods: 0 Mar 21 21:14:19.586: INFO: Number of running nodes: 0, number of available pods: 0 Mar 21 21:14:19.590: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8745/daemonsets","resourceVersion":"1643417"},"items":null} Mar 21 21:14:19.592: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8745/pods","resourceVersion":"1643417"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:14:19.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8745" for this suite. 
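The revival behaviour is observable by hand. The test forces a pod's phase to Failed through the status API; deleting a daemon pod, as sketched below, exercises the same reconcile path (the label selector is an assumption about the e2e manifest):

    # Remove one daemon pod and watch the DaemonSet controller schedule a
    # replacement on the same node.
    POD=$(kubectl -n daemonsets-8745 get pods -l name=daemon-set \
          -o jsonpath='{.items[0].metadata.name}')
    kubectl -n daemonsets-8745 delete pod "$POD"
    kubectl -n daemonsets-8745 get pods -o wide --watch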
• [SLOW TEST:21.881 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":34,"skipped":558,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:14:19.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:14:19.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 21 21:14:19.886: INFO: stderr: "" Mar 21 21:14:19.886: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:31:51Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:14:19.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5433" for this suite. 
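The version check above can be repeated directly; the test only asserts that both the client and the server stanzas appear in the output. The JSON form below is an extra convenience for scripting, not something the test itself uses:

kubectl --kubeconfig="$HOME/.kube/config" version
# Machine-readable variant: both clientVersion and serverVersion keys should be present.
kubectl version -o json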
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":35,"skipped":607,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:14:19.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 21 21:14:19.955: INFO: Waiting up to 5m0s for pod "pod-d041b5d7-fbe5-4296-a739-48e80d45949d" in namespace "emptydir-6838" to be "success or failure" Mar 21 21:14:19.979: INFO: Pod "pod-d041b5d7-fbe5-4296-a739-48e80d45949d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.148281ms Mar 21 21:14:21.983: INFO: Pod "pod-d041b5d7-fbe5-4296-a739-48e80d45949d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027132659s Mar 21 21:14:23.987: INFO: Pod "pod-d041b5d7-fbe5-4296-a739-48e80d45949d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031669621s STEP: Saw pod success Mar 21 21:14:23.987: INFO: Pod "pod-d041b5d7-fbe5-4296-a739-48e80d45949d" satisfied condition "success or failure" Mar 21 21:14:23.990: INFO: Trying to get logs from node jerma-worker pod pod-d041b5d7-fbe5-4296-a739-48e80d45949d container test-container: STEP: delete the pod Mar 21 21:14:24.008: INFO: Waiting for pod pod-d041b5d7-fbe5-4296-a739-48e80d45949d to disappear Mar 21 21:14:24.013: INFO: Pod pod-d041b5d7-fbe5-4296-a739-48e80d45949d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:14:24.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6838" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":610,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:14:24.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-81a47741-51b0-4a5d-a0c7-e8cca21982d2 in namespace container-probe-3543 Mar 21 21:14:28.077: INFO: Started pod liveness-81a47741-51b0-4a5d-a0c7-e8cca21982d2 in namespace container-probe-3543 STEP: checking the pod's current state and verifying that restartCount is present Mar 21 21:14:28.080: INFO: Initial restart count of pod liveness-81a47741-51b0-4a5d-a0c7-e8cca21982d2 is 0 Mar 21 21:14:48.124: INFO: Restart count of pod container-probe-3543/liveness-81a47741-51b0-4a5d-a0c7-e8cca21982d2 is now 1 (20.044581601s elapsed) Mar 21 21:15:08.201: INFO: Restart count of pod container-probe-3543/liveness-81a47741-51b0-4a5d-a0c7-e8cca21982d2 is now 2 (40.121329496s elapsed) Mar 21 21:15:28.260: INFO: Restart count of pod container-probe-3543/liveness-81a47741-51b0-4a5d-a0c7-e8cca21982d2 is now 3 (1m0.179894522s elapsed) Mar 21 21:15:48.313: INFO: Restart count of pod container-probe-3543/liveness-81a47741-51b0-4a5d-a0c7-e8cca21982d2 is now 4 (1m20.233140471s elapsed) Mar 21 21:17:00.454: INFO: Restart count of pod container-probe-3543/liveness-81a47741-51b0-4a5d-a0c7-e8cca21982d2 is now 5 (2m32.374611652s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:17:00.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3543" for this suite. 
• [SLOW TEST:156.456 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":627,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:17:00.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-89fe70e4-2852-4a25-b102-e984c0208c77 STEP: Creating a pod to test consume secrets Mar 21 21:17:00.595: INFO: Waiting up to 5m0s for pod "pod-secrets-d474afca-8da2-48d0-af50-edd8e8b1c20d" in namespace "secrets-4033" to be "success or failure" Mar 21 21:17:00.747: INFO: Pod "pod-secrets-d474afca-8da2-48d0-af50-edd8e8b1c20d": Phase="Pending", Reason="", readiness=false. Elapsed: 152.145941ms Mar 21 21:17:02.751: INFO: Pod "pod-secrets-d474afca-8da2-48d0-af50-edd8e8b1c20d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155981357s Mar 21 21:17:04.755: INFO: Pod "pod-secrets-d474afca-8da2-48d0-af50-edd8e8b1c20d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.160246032s STEP: Saw pod success Mar 21 21:17:04.755: INFO: Pod "pod-secrets-d474afca-8da2-48d0-af50-edd8e8b1c20d" satisfied condition "success or failure" Mar 21 21:17:04.758: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d474afca-8da2-48d0-af50-edd8e8b1c20d container secret-volume-test: STEP: delete the pod Mar 21 21:17:04.791: INFO: Waiting for pod pod-secrets-d474afca-8da2-48d0-af50-edd8e8b1c20d to disappear Mar 21 21:17:04.812: INFO: Pod pod-secrets-d474afca-8da2-48d0-af50-edd8e8b1c20d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:17:04.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4033" for this suite. 
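The multiple-volumes case amounts to mounting one Secret at two paths in the same pod; both mounts should expose the same key. All names below are illustrative.

kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-two-mounts
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-a/data-1 /etc/secret-b/data-1"]
    volumeMounts:
    - name: secret-a
      mountPath: /etc/secret-a
    - name: secret-b
      mountPath: /etc/secret-b
  volumes:
  - name: secret-a
    secret:
      secretName: demo-secret
  - name: secret-b
    secret:
      secretName: demo-secret
EOF
kubectl logs -f secret-two-mounts   # expect value-1 printed twice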
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":644,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:17:04.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 21 21:17:04.907: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:04.932: INFO: Number of nodes with available pods: 0 Mar 21 21:17:04.932: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:05.937: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:05.940: INFO: Number of nodes with available pods: 0 Mar 21 21:17:05.940: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:06.937: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:06.941: INFO: Number of nodes with available pods: 0 Mar 21 21:17:06.941: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:07.937: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:07.941: INFO: Number of nodes with available pods: 0 Mar 21 21:17:07.941: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:08.937: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:08.941: INFO: Number of nodes with available pods: 0 Mar 21 21:17:08.941: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:09.957: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:09.960: INFO: Number of nodes with available pods: 2 Mar 21 21:17:09.960: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 21 21:17:09.996: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:09.999: INFO: Number of nodes with available pods: 1 Mar 21 21:17:09.999: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:11.013: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:11.016: INFO: Number of nodes with available pods: 1 Mar 21 21:17:11.016: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:12.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:12.012: INFO: Number of nodes with available pods: 1 Mar 21 21:17:12.012: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:13.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:13.007: INFO: Number of nodes with available pods: 1 Mar 21 21:17:13.007: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:14.004: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:14.023: INFO: Number of nodes with available pods: 1 Mar 21 21:17:14.023: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:15.004: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:15.007: INFO: Number of nodes with available pods: 1 Mar 21 21:17:15.007: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:16.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:16.006: INFO: Number of nodes with available pods: 1 Mar 21 21:17:16.006: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:17.004: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:17.008: INFO: Number of nodes with available pods: 1 Mar 21 21:17:17.008: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:18.002: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:18.005: INFO: Number of nodes with available pods: 1 Mar 21 21:17:18.005: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:19.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:19.007: INFO: Number of nodes with available pods: 1 Mar 21 21:17:19.007: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:20.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:20.006: INFO: Number of nodes with available pods: 1 Mar 21 21:17:20.006: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:21.013: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:21.015: INFO: Number of nodes with available pods: 1 Mar 21 21:17:21.015: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:22.004: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:22.008: INFO: Number of nodes with available pods: 1 Mar 21 21:17:22.008: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:17:23.004: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:17:23.008: INFO: Number of nodes with available pods: 2 Mar 21 21:17:23.008: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8423, will wait for the garbage collector to delete the pods Mar 21 21:17:23.071: INFO: Deleting DaemonSet.extensions daemon-set took: 6.06887ms Mar 21 21:17:23.371: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265831ms Mar 21 21:17:29.274: INFO: Number of nodes with available pods: 0 Mar 21 21:17:29.274: INFO: Number of running nodes: 0, number of available pods: 0 Mar 21 21:17:29.277: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8423/daemonsets","resourceVersion":"1644158"},"items":null} Mar 21 21:17:29.279: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8423/pods","resourceVersion":"1644158"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:17:29.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8423" for this suite. 
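The "stop" half of this test — delete the DaemonSet and let the garbage collector remove its pods, as the "will wait for the garbage collector" lines above show — looks like this by hand (reusing the demo-daemon sketch from earlier; the label selector is part of that sketch, not the suite):

kubectl delete daemonset demo-daemon
# Wait out the cleanup the same way the test does: block until every daemon pod is gone.
kubectl wait --for=delete pod -l app=demo-daemon --timeout=120s
kubectl get pods -l app=demo-daemon   # "No resources found" once GC finishes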
• [SLOW TEST:24.493 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":39,"skipped":646,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:17:29.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 21:17:29.372: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40796bcd-2668-4806-83ef-2383387e905b" in namespace "projected-568" to be "success or failure" Mar 21 21:17:29.376: INFO: Pod "downwardapi-volume-40796bcd-2668-4806-83ef-2383387e905b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.677378ms Mar 21 21:17:31.394: INFO: Pod "downwardapi-volume-40796bcd-2668-4806-83ef-2383387e905b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021734406s Mar 21 21:17:33.398: INFO: Pod "downwardapi-volume-40796bcd-2668-4806-83ef-2383387e905b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025989259s STEP: Saw pod success Mar 21 21:17:33.399: INFO: Pod "downwardapi-volume-40796bcd-2668-4806-83ef-2383387e905b" satisfied condition "success or failure" Mar 21 21:17:33.402: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-40796bcd-2668-4806-83ef-2383387e905b container client-container: STEP: delete the pod Mar 21 21:17:33.419: INFO: Waiting for pod downwardapi-volume-40796bcd-2668-4806-83ef-2383387e905b to disappear Mar 21 21:17:33.424: INFO: Pod downwardapi-volume-40796bcd-2668-4806-83ef-2383387e905b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:17:33.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-568" for this suite. 
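The projected downwardAPI case can be sketched as follows: with no cpu limit set on the container, the resourceFieldRef falls back to the node's allocatable CPU, which is the value the test asserts. Pod and file names here are illustrative assumptions.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set, on purpose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
EOF
kubectl logs projected-downward-demo   # node allocatable cpu, in millicores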
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":681,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:17:33.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 21 21:17:33.504: INFO: namespace kubectl-6566 Mar 21 21:17:33.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6566' Mar 21 21:17:33.811: INFO: stderr: "" Mar 21 21:17:33.811: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 21 21:17:34.820: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 21:17:34.820: INFO: Found 0 / 1 Mar 21 21:17:35.815: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 21:17:35.815: INFO: Found 0 / 1 Mar 21 21:17:36.816: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 21:17:36.816: INFO: Found 0 / 1 Mar 21 21:17:37.815: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 21:17:37.815: INFO: Found 1 / 1 Mar 21 21:17:37.815: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 21 21:17:37.819: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 21:17:37.819: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 21 21:17:37.819: INFO: wait on agnhost-master startup in kubectl-6566 Mar 21 21:17:37.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-hx9g8 agnhost-master --namespace=kubectl-6566' Mar 21 21:17:37.945: INFO: stderr: "" Mar 21 21:17:37.945: INFO: stdout: "Paused\n" STEP: exposing RC Mar 21 21:17:37.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6566' Mar 21 21:17:38.105: INFO: stderr: "" Mar 21 21:17:38.105: INFO: stdout: "service/rm2 exposed\n" Mar 21 21:17:38.114: INFO: Service rm2 in namespace kubectl-6566 found. STEP: exposing service Mar 21 21:17:40.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6566' Mar 21 21:17:40.254: INFO: stderr: "" Mar 21 21:17:40.254: INFO: stdout: "service/rm3 exposed\n" Mar 21 21:17:40.293: INFO: Service rm3 in namespace kubectl-6566 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:17:42.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6566" for this suite. 
• [SLOW TEST:8.871 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1295 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":41,"skipped":684,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:17:42.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 21:17:42.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8bb70f5-ac04-4d06-9a63-ecea7259199d" in namespace "downward-api-5384" to be "success or failure" Mar 21 21:17:42.391: INFO: Pod "downwardapi-volume-c8bb70f5-ac04-4d06-9a63-ecea7259199d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.959249ms Mar 21 21:17:44.395: INFO: Pod "downwardapi-volume-c8bb70f5-ac04-4d06-9a63-ecea7259199d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008569002s Mar 21 21:17:46.399: INFO: Pod "downwardapi-volume-c8bb70f5-ac04-4d06-9a63-ecea7259199d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01281909s STEP: Saw pod success Mar 21 21:17:46.399: INFO: Pod "downwardapi-volume-c8bb70f5-ac04-4d06-9a63-ecea7259199d" satisfied condition "success or failure" Mar 21 21:17:46.403: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c8bb70f5-ac04-4d06-9a63-ecea7259199d container client-container: STEP: delete the pod Mar 21 21:17:46.435: INFO: Waiting for pod downwardapi-volume-c8bb70f5-ac04-4d06-9a63-ecea7259199d to disappear Mar 21 21:17:46.451: INFO: Pod downwardapi-volume-c8bb70f5-ac04-4d06-9a63-ecea7259199d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:17:46.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5384" for this suite. 
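Setting a mode on a single downwardAPI item is just a per-item field. The sketch below requests 0400 (written as decimal 256, since the field is an integer) and reads it back with stat, which dereferences the volume's symlinks; names are illustrative.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 256        # octal 0400: owner read-only
EOF
kubectl logs downward-mode-demo   # expect 400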
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":691,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:17:46.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:17:46.520: INFO: Creating ReplicaSet my-hostname-basic-8c78e1a9-006d-4e07-8e26-9aea430d56ff Mar 21 21:17:46.540: INFO: Pod name my-hostname-basic-8c78e1a9-006d-4e07-8e26-9aea430d56ff: Found 0 pods out of 1 Mar 21 21:17:51.544: INFO: Pod name my-hostname-basic-8c78e1a9-006d-4e07-8e26-9aea430d56ff: Found 1 pods out of 1 Mar 21 21:17:51.544: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8c78e1a9-006d-4e07-8e26-9aea430d56ff" is running Mar 21 21:17:51.547: INFO: Pod "my-hostname-basic-8c78e1a9-006d-4e07-8e26-9aea430d56ff-rhs6n" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 21:17:46 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 21:17:50 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 21:17:50 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 21:17:46 +0000 UTC Reason: Message:}]) Mar 21 21:17:51.547: INFO: Trying to dial the pod Mar 21 21:17:56.559: INFO: Controller my-hostname-basic-8c78e1a9-006d-4e07-8e26-9aea430d56ff: Got expected result from replica 1 [my-hostname-basic-8c78e1a9-006d-4e07-8e26-9aea430d56ff-rhs6n]: "my-hostname-basic-8c78e1a9-006d-4e07-8e26-9aea430d56ff-rhs6n", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:17:56.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5731" for this suite. 
• [SLOW TEST:10.109 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":43,"skipped":722,"failed":0} SSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:17:56.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Mar 21 21:17:56.627: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1197" to be "success or failure" Mar 21 21:17:56.630: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.884224ms Mar 21 21:17:58.635: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0071852s Mar 21 21:18:00.639: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.01162121s Mar 21 21:18:02.643: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015705252s STEP: Saw pod success Mar 21 21:18:02.643: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 21 21:18:02.646: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 21 21:18:02.667: INFO: Waiting for pod pod-host-path-test to disappear Mar 21 21:18:02.686: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:18:02.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1197" for this suite. 
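A minimal hostPath equivalent: mount a host directory and report its mode, which is what the "correct mode" assertion inspects. The path and names are illustrative; DirectoryOrCreate keeps the sketch self-contained on any node.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF
kubectl logs hostpath-mode-demo   # the directory's mode, as seen from inside the pod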
• [SLOW TEST:6.126 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:18:02.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 21 21:18:02.726: INFO: PodSpec: initContainers in spec.initContainers Mar 21 21:18:48.222: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-376fb641-8ee2-4786-b1fd-ee85fb0c6ed9", GenerateName:"", Namespace:"init-container-6416", SelfLink:"/api/v1/namespaces/init-container-6416/pods/pod-init-376fb641-8ee2-4786-b1fd-ee85fb0c6ed9", UID:"bcfcc220-23bf-4cde-8225-69b06612ee30", ResourceVersion:"1644590", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720422282, loc:(*time.Location)(0x7d83a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"726949638"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ckmh8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002545f00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ckmh8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ckmh8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ckmh8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00501c498), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00322c900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00501c520)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00501c540)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00501c548), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00501c54c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422282, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422282, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422282, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422282, loc:(*time.Location)(0x7d83a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.117", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.117"}}, StartTime:(*v1.Time)(0xc002d85200), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a063f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a06460)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://e9248c2c57bb6911e1cf60b202ca5f79a6382dfa2133b05eed53c3a5114feb55", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d85240), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d85220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00501c5cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:18:48.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6416" for this suite. • [SLOW TEST:45.631 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":45,"skipped":800,"failed":0} SS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:18:48.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-c7186d59-5f37-48c5-9a26-490c86fc74a2 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-c7186d59-5f37-48c5-9a26-490c86fc74a2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:20:08.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2271" for this suite. 
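The update-propagation test reduces to: mount a ConfigMap, change it, and watch the mounted file change. The kubelet re-syncs mounted ConfigMaps periodically, which is why the test above waits so long; names below are illustrative.

kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-update-demo
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF
kubectl patch configmap demo-config -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f configmap-update-demo   # value-1 flips to value-2 once the kubelet re-syncs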
• [SLOW TEST:80.481 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":802,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:20:08.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-4165/secret-test-1896ff50-222e-4bce-a706-d8d8a27e1f93 STEP: Creating a pod to test consume secrets Mar 21 21:20:08.899: INFO: Waiting up to 5m0s for pod "pod-configmaps-3634ab54-2f68-4774-91e3-ee05c0e6c86e" in namespace "secrets-4165" to be "success or failure" Mar 21 21:20:08.902: INFO: Pod "pod-configmaps-3634ab54-2f68-4774-91e3-ee05c0e6c86e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.402846ms Mar 21 21:20:10.905: INFO: Pod "pod-configmaps-3634ab54-2f68-4774-91e3-ee05c0e6c86e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006589733s Mar 21 21:20:12.910: INFO: Pod "pod-configmaps-3634ab54-2f68-4774-91e3-ee05c0e6c86e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01096179s STEP: Saw pod success Mar 21 21:20:12.910: INFO: Pod "pod-configmaps-3634ab54-2f68-4774-91e3-ee05c0e6c86e" satisfied condition "success or failure" Mar 21 21:20:12.913: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-3634ab54-2f68-4774-91e3-ee05c0e6c86e container env-test: STEP: delete the pod Mar 21 21:20:12.954: INFO: Waiting for pod pod-configmaps-3634ab54-2f68-4774-91e3-ee05c0e6c86e to disappear Mar 21 21:20:12.962: INFO: Pod pod-configmaps-3634ab54-2f68-4774-91e3-ee05c0e6c86e no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:20:12.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4165" for this suite. 
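Consuming a Secret through the environment is a secretKeyRef on an env var; the sketch echoes the injected value once. Names are illustrative.

kubectl create secret generic demo-env-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo \"$SECRET_DATA\""]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-env-secret
          key: data-1
EOF
kubectl logs secret-env-demo   # value-1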
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":805,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:20:12.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Mar 21 21:20:13.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8326 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 21 21:20:16.476: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0321 21:20:16.389100 663 log.go:172] (0xc0001071e0) (0xc0009ee3c0) Create stream\nI0321 21:20:16.389367 663 log.go:172] (0xc0001071e0) (0xc0009ee3c0) Stream added, broadcasting: 1\nI0321 21:20:16.392218 663 log.go:172] (0xc0001071e0) Reply frame received for 1\nI0321 21:20:16.392286 663 log.go:172] (0xc0001071e0) (0xc0006af900) Create stream\nI0321 21:20:16.392325 663 log.go:172] (0xc0001071e0) (0xc0006af900) Stream added, broadcasting: 3\nI0321 21:20:16.393374 663 log.go:172] (0xc0001071e0) Reply frame received for 3\nI0321 21:20:16.393420 663 log.go:172] (0xc0001071e0) (0xc000139360) Create stream\nI0321 21:20:16.393439 663 log.go:172] (0xc0001071e0) (0xc000139360) Stream added, broadcasting: 5\nI0321 21:20:16.394308 663 log.go:172] (0xc0001071e0) Reply frame received for 5\nI0321 21:20:16.394344 663 log.go:172] (0xc0001071e0) (0xc0009ee460) Create stream\nI0321 21:20:16.394356 663 log.go:172] (0xc0001071e0) (0xc0009ee460) Stream added, broadcasting: 7\nI0321 21:20:16.395241 663 log.go:172] (0xc0001071e0) Reply frame received for 7\nI0321 21:20:16.395365 663 log.go:172] (0xc0006af900) (3) Writing data frame\nI0321 21:20:16.395492 663 log.go:172] (0xc0006af900) (3) Writing data frame\nI0321 21:20:16.396387 663 log.go:172] (0xc0001071e0) Data frame received for 5\nI0321 21:20:16.396413 663 log.go:172] (0xc000139360) (5) Data frame handling\nI0321 21:20:16.396441 663 log.go:172] (0xc000139360) (5) Data frame sent\nI0321 21:20:16.397373 663 log.go:172] (0xc0001071e0) Data frame received for 5\nI0321 21:20:16.397392 663 log.go:172] (0xc000139360) (5) Data frame handling\nI0321 21:20:16.397410 663 log.go:172] (0xc000139360) (5) Data frame sent\nI0321 21:20:16.455952 663 log.go:172] (0xc0001071e0) Data frame received for 5\nI0321 21:20:16.455967 663 log.go:172] (0xc000139360) (5) Data frame handling\nI0321 
21:20:16.455996 663 log.go:172] (0xc0001071e0) Data frame received for 7\nI0321 21:20:16.456017 663 log.go:172] (0xc0009ee460) (7) Data frame handling\nI0321 21:20:16.456507 663 log.go:172] (0xc0001071e0) Data frame received for 1\nI0321 21:20:16.456590 663 log.go:172] (0xc0009ee3c0) (1) Data frame handling\nI0321 21:20:16.456616 663 log.go:172] (0xc0009ee3c0) (1) Data frame sent\nI0321 21:20:16.456631 663 log.go:172] (0xc0001071e0) (0xc0009ee3c0) Stream removed, broadcasting: 1\nI0321 21:20:16.456724 663 log.go:172] (0xc0001071e0) (0xc0006af900) Stream removed, broadcasting: 3\nI0321 21:20:16.456796 663 log.go:172] (0xc0001071e0) Go away received\nI0321 21:20:16.456982 663 log.go:172] (0xc0001071e0) (0xc0009ee3c0) Stream removed, broadcasting: 1\nI0321 21:20:16.457048 663 log.go:172] (0xc0001071e0) (0xc0006af900) Stream removed, broadcasting: 3\nI0321 21:20:16.457070 663 log.go:172] (0xc0001071e0) (0xc000139360) Stream removed, broadcasting: 5\nI0321 21:20:16.457079 663 log.go:172] (0xc0001071e0) (0xc0009ee460) Stream removed, broadcasting: 7\n" Mar 21 21:20:16.476: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:20:18.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8326" for this suite. • [SLOW TEST:5.524 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1944 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":48,"skipped":833,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:20:18.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 21:20:19.166: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 21:20:21.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422419, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422419, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422419, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422419, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 21:20:23.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422419, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422419, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422419, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422419, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 21:20:26.220: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:20:26.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5783" for this suite. STEP: Destroying namespace "webhook-5783-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.862 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":49,"skipped":860,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:20:26.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Mar 21 21:20:26.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 21 21:20:26.914: INFO: stderr: "" Mar 21 21:20:26.914: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:20:26.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5302" for this suite. 
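For reference, the two checks that just passed are easy to reproduce by hand against the same kubeconfig. A minimal sketch (the grep targets come straight from the stdout above):

    # "v1" must appear as a whole line in the api-versions listing
    kubectl api-versions | grep -x 'v1'

    # the admissionregistration.k8s.io/v1 discovery document must list both webhook resources
    kubectl get --raw /apis/admissionregistration.k8s.io/v1 \
      | grep -e mutatingwebhookconfigurations -e validatingwebhookconfigurations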
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":50,"skipped":878,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:20:26.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 21 21:20:27.242: INFO: Waiting up to 5m0s for pod "pod-1c594108-4cfa-4d28-914b-1cf9b6fa4d86" in namespace "emptydir-8387" to be "success or failure" Mar 21 21:20:27.250: INFO: Pod "pod-1c594108-4cfa-4d28-914b-1cf9b6fa4d86": Phase="Pending", Reason="", readiness=false. Elapsed: 8.594143ms Mar 21 21:20:29.254: INFO: Pod "pod-1c594108-4cfa-4d28-914b-1cf9b6fa4d86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012387624s Mar 21 21:20:31.277: INFO: Pod "pod-1c594108-4cfa-4d28-914b-1cf9b6fa4d86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035833732s STEP: Saw pod success Mar 21 21:20:31.277: INFO: Pod "pod-1c594108-4cfa-4d28-914b-1cf9b6fa4d86" satisfied condition "success or failure" Mar 21 21:20:31.292: INFO: Trying to get logs from node jerma-worker2 pod pod-1c594108-4cfa-4d28-914b-1cf9b6fa4d86 container test-container: STEP: delete the pod Mar 21 21:20:31.346: INFO: Waiting for pod pod-1c594108-4cfa-4d28-914b-1cf9b6fa4d86 to disappear Mar 21 21:20:31.373: INFO: Pod pod-1c594108-4cfa-4d28-914b-1cf9b6fa4d86 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:20:31.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8387" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":884,"failed":0} SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:20:31.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-6de1376f-7a9c-4da1-b315-391fcbe9a478 STEP: Creating secret with name secret-projected-all-test-volume-84809a3b-c6ac-4217-9df0-8630888f2525 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 21 21:20:31.522: INFO: Waiting up to 5m0s for pod "projected-volume-b59e360b-8fdc-4c39-a6cf-7cd6f8f31378" in namespace "projected-8027" to be "success or failure" Mar 21 21:20:31.554: INFO: Pod "projected-volume-b59e360b-8fdc-4c39-a6cf-7cd6f8f31378": Phase="Pending", Reason="", readiness=false. Elapsed: 31.899272ms Mar 21 21:20:33.558: INFO: Pod "projected-volume-b59e360b-8fdc-4c39-a6cf-7cd6f8f31378": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035943975s Mar 21 21:20:35.562: INFO: Pod "projected-volume-b59e360b-8fdc-4c39-a6cf-7cd6f8f31378": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040275799s STEP: Saw pod success Mar 21 21:20:35.562: INFO: Pod "projected-volume-b59e360b-8fdc-4c39-a6cf-7cd6f8f31378" satisfied condition "success or failure" Mar 21 21:20:35.565: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-b59e360b-8fdc-4c39-a6cf-7cd6f8f31378 container projected-all-volume-test: STEP: delete the pod Mar 21 21:20:35.581: INFO: Waiting for pod projected-volume-b59e360b-8fdc-4c39-a6cf-7cd6f8f31378 to disappear Mar 21 21:20:35.586: INFO: Pod projected-volume-b59e360b-8fdc-4c39-a6cf-7cd6f8f31378 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:20:35.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8027" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":52,"skipped":887,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:20:35.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 21 21:20:35.711: INFO: Waiting up to 5m0s for pod "pod-02ccf78f-2150-4df2-83fa-72a82f9b1392" in namespace "emptydir-3964" to be "success or failure" Mar 21 21:20:35.723: INFO: Pod "pod-02ccf78f-2150-4df2-83fa-72a82f9b1392": Phase="Pending", Reason="", readiness=false. Elapsed: 11.771358ms Mar 21 21:20:37.727: INFO: Pod "pod-02ccf78f-2150-4df2-83fa-72a82f9b1392": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015635879s Mar 21 21:20:39.731: INFO: Pod "pod-02ccf78f-2150-4df2-83fa-72a82f9b1392": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019971451s STEP: Saw pod success Mar 21 21:20:39.731: INFO: Pod "pod-02ccf78f-2150-4df2-83fa-72a82f9b1392" satisfied condition "success or failure" Mar 21 21:20:39.735: INFO: Trying to get logs from node jerma-worker2 pod pod-02ccf78f-2150-4df2-83fa-72a82f9b1392 container test-container: STEP: delete the pod Mar 21 21:20:39.771: INFO: Waiting for pod pod-02ccf78f-2150-4df2-83fa-72a82f9b1392 to disappear Mar 21 21:20:39.783: INFO: Pod pod-02ccf78f-2150-4df2-83fa-72a82f9b1392 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:20:39.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3964" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:20:39.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 21 21:20:39.869: INFO: Waiting up to 5m0s for pod "client-containers-89cce82a-5cf1-4846-b321-3a30e3bc5fae" in namespace "containers-3778" to be "success or failure" Mar 21 21:20:39.873: INFO: Pod "client-containers-89cce82a-5cf1-4846-b321-3a30e3bc5fae": Phase="Pending", Reason="", readiness=false. Elapsed: 3.080687ms Mar 21 21:20:41.877: INFO: Pod "client-containers-89cce82a-5cf1-4846-b321-3a30e3bc5fae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007131993s Mar 21 21:20:43.881: INFO: Pod "client-containers-89cce82a-5cf1-4846-b321-3a30e3bc5fae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011548039s STEP: Saw pod success Mar 21 21:20:43.881: INFO: Pod "client-containers-89cce82a-5cf1-4846-b321-3a30e3bc5fae" satisfied condition "success or failure" Mar 21 21:20:43.885: INFO: Trying to get logs from node jerma-worker pod client-containers-89cce82a-5cf1-4846-b321-3a30e3bc5fae container test-container: STEP: delete the pod Mar 21 21:20:43.923: INFO: Waiting for pod client-containers-89cce82a-5cf1-4846-b321-3a30e3bc5fae to disappear Mar 21 21:20:43.927: INFO: Pod client-containers-89cce82a-5cf1-4846-b321-3a30e3bc5fae no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:20:43.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3778" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":931,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:20:43.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:21:15.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-353" for this suite. STEP: Destroying namespace "nsdeletetest-2897" for this suite. Mar 21 21:21:15.173: INFO: Namespace nsdeletetest-2897 was already deleted STEP: Destroying namespace "nsdeletetest-3078" for this suite. 
• [SLOW TEST:31.241 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":55,"skipped":959,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:21:15.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-83bef3f0-506a-40c2-ae68-6ff31f64ad93 STEP: Creating a pod to test consume secrets Mar 21 21:21:15.229: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b2c53d9b-2585-44e1-89d2-c8abb30de8db" in namespace "projected-5818" to be "success or failure" Mar 21 21:21:15.250: INFO: Pod "pod-projected-secrets-b2c53d9b-2585-44e1-89d2-c8abb30de8db": Phase="Pending", Reason="", readiness=false. Elapsed: 21.124048ms Mar 21 21:21:17.254: INFO: Pod "pod-projected-secrets-b2c53d9b-2585-44e1-89d2-c8abb30de8db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024782317s Mar 21 21:21:19.257: INFO: Pod "pod-projected-secrets-b2c53d9b-2585-44e1-89d2-c8abb30de8db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027511859s STEP: Saw pod success Mar 21 21:21:19.257: INFO: Pod "pod-projected-secrets-b2c53d9b-2585-44e1-89d2-c8abb30de8db" satisfied condition "success or failure" Mar 21 21:21:19.259: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-b2c53d9b-2585-44e1-89d2-c8abb30de8db container projected-secret-volume-test: STEP: delete the pod Mar 21 21:21:19.301: INFO: Waiting for pod pod-projected-secrets-b2c53d9b-2585-44e1-89d2-c8abb30de8db to disappear Mar 21 21:21:19.304: INFO: Pod pod-projected-secrets-b2c53d9b-2585-44e1-89d2-c8abb30de8db no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:21:19.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5818" for this suite. 
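The projected-secret spec above is the single-source case of the projection sketch shown earlier: one secret surfaced through a projected volume. A minimal sketch with hypothetical names:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /etc/projected-secret/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/projected-secret
          readOnly: true
      volumes:
      - name: secret-volume
        projected:
          defaultMode: 0400           # keys readable by the pod's UID only
          sources:
          - secret:
              name: demo-secret
    EOF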
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":982,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:21:19.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-63b6683d-687b-4ab6-b56d-19ff6cdf2c42 STEP: Creating a pod to test consume configMaps Mar 21 21:21:19.367: INFO: Waiting up to 5m0s for pod "pod-configmaps-e28724b0-d748-4508-9ee8-0e74b0567552" in namespace "configmap-4167" to be "success or failure" Mar 21 21:21:19.388: INFO: Pod "pod-configmaps-e28724b0-d748-4508-9ee8-0e74b0567552": Phase="Pending", Reason="", readiness=false. Elapsed: 20.610128ms Mar 21 21:21:21.421: INFO: Pod "pod-configmaps-e28724b0-d748-4508-9ee8-0e74b0567552": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054005354s Mar 21 21:21:23.433: INFO: Pod "pod-configmaps-e28724b0-d748-4508-9ee8-0e74b0567552": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06552218s STEP: Saw pod success Mar 21 21:21:23.433: INFO: Pod "pod-configmaps-e28724b0-d748-4508-9ee8-0e74b0567552" satisfied condition "success or failure" Mar 21 21:21:23.457: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e28724b0-d748-4508-9ee8-0e74b0567552 container configmap-volume-test: STEP: delete the pod Mar 21 21:21:23.498: INFO: Waiting for pod pod-configmaps-e28724b0-d748-4508-9ee8-0e74b0567552 to disappear Mar 21 21:21:23.521: INFO: Pod pod-configmaps-e28724b0-d748-4508-9ee8-0e74b0567552 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:21:23.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4167" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1007,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:21:23.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-a9144469-49e7-4e18-9844-93cf70a80780 STEP: Creating a pod to test consume secrets Mar 21 21:21:23.625: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f840ae83-e884-405d-bf4d-7c99fdd201d6" in namespace "projected-1260" to be "success or failure" Mar 21 21:21:23.658: INFO: Pod "pod-projected-secrets-f840ae83-e884-405d-bf4d-7c99fdd201d6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.345794ms Mar 21 21:21:25.663: INFO: Pod "pod-projected-secrets-f840ae83-e884-405d-bf4d-7c99fdd201d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038185243s Mar 21 21:21:27.667: INFO: Pod "pod-projected-secrets-f840ae83-e884-405d-bf4d-7c99fdd201d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042494581s STEP: Saw pod success Mar 21 21:21:27.667: INFO: Pod "pod-projected-secrets-f840ae83-e884-405d-bf4d-7c99fdd201d6" satisfied condition "success or failure" Mar 21 21:21:27.671: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-f840ae83-e884-405d-bf4d-7c99fdd201d6 container secret-volume-test: STEP: delete the pod Mar 21 21:21:27.690: INFO: Waiting for pod pod-projected-secrets-f840ae83-e884-405d-bf4d-7c99fdd201d6 to disappear Mar 21 21:21:27.694: INFO: Pod pod-projected-secrets-f840ae83-e884-405d-bf4d-7c99fdd201d6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:21:27.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1260" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":1017,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:21:27.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 21:21:28.300: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 21:21:30.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422488, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422488, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422488, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422488, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 21:21:32.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422488, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422488, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422488, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720422488, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 21:21:35.391: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:21:36.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1221" for this suite. STEP: Destroying namespace "webhook-1221-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.433 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":59,"skipped":1024,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:21:36.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-1c31621e-b197-435f-bba0-b1d33291de28 STEP: Creating a pod to test consume configMaps Mar 21 21:21:36.255: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-99b0baf8-bf78-4e3d-adb4-8967730ff71f" in namespace "projected-7387" to be "success or failure" Mar 21 21:21:36.264: INFO: Pod "pod-projected-configmaps-99b0baf8-bf78-4e3d-adb4-8967730ff71f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.243056ms Mar 21 21:21:38.282: INFO: Pod "pod-projected-configmaps-99b0baf8-bf78-4e3d-adb4-8967730ff71f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027318083s Mar 21 21:21:40.287: INFO: Pod "pod-projected-configmaps-99b0baf8-bf78-4e3d-adb4-8967730ff71f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031574397s STEP: Saw pod success Mar 21 21:21:40.287: INFO: Pod "pod-projected-configmaps-99b0baf8-bf78-4e3d-adb4-8967730ff71f" satisfied condition "success or failure" Mar 21 21:21:40.290: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-99b0baf8-bf78-4e3d-adb4-8967730ff71f container projected-configmap-volume-test: STEP: delete the pod Mar 21 21:21:40.307: INFO: Waiting for pod pod-projected-configmaps-99b0baf8-bf78-4e3d-adb4-8967730ff71f to disappear Mar 21 21:21:40.311: INFO: Pod pod-projected-configmaps-99b0baf8-bf78-4e3d-adb4-8967730ff71f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:21:40.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7387" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":1057,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:21:40.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:21:40.374: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-d69ae7aa-5b87-46f3-9afe-307f917c316e" in namespace "security-context-test-2251" to be "success or failure" Mar 21 21:21:40.392: INFO: Pod "busybox-readonly-false-d69ae7aa-5b87-46f3-9afe-307f917c316e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.151842ms Mar 21 21:21:42.398: INFO: Pod "busybox-readonly-false-d69ae7aa-5b87-46f3-9afe-307f917c316e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023903012s Mar 21 21:21:44.402: INFO: Pod "busybox-readonly-false-d69ae7aa-5b87-46f3-9afe-307f917c316e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028242804s Mar 21 21:21:44.402: INFO: Pod "busybox-readonly-false-d69ae7aa-5b87-46f3-9afe-307f917c316e" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:21:44.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2251" for this suite. 
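readOnlyRootFilesystem=false is the default, so the busybox pod above can write anywhere on its root filesystem; flipping the flag to true makes the same write fail with a read-only error. A sketch with a hypothetical name:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-false-demo
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo writable > /rootfs-probe && cat /rootfs-probe"]
        securityContext:
          readOnlyRootFilesystem: false   # set to true and the write above fails
    EOF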
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1063,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:21:44.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 21:21:44.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3dd1a91b-f23e-4a42-919f-5218c398f949" in namespace "downward-api-9807" to be "success or failure" Mar 21 21:21:44.528: INFO: Pod "downwardapi-volume-3dd1a91b-f23e-4a42-919f-5218c398f949": Phase="Pending", Reason="", readiness=false. Elapsed: 7.745845ms Mar 21 21:21:46.532: INFO: Pod "downwardapi-volume-3dd1a91b-f23e-4a42-919f-5218c398f949": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011954675s Mar 21 21:21:48.536: INFO: Pod "downwardapi-volume-3dd1a91b-f23e-4a42-919f-5218c398f949": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016372364s STEP: Saw pod success Mar 21 21:21:48.537: INFO: Pod "downwardapi-volume-3dd1a91b-f23e-4a42-919f-5218c398f949" satisfied condition "success or failure" Mar 21 21:21:48.540: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3dd1a91b-f23e-4a42-919f-5218c398f949 container client-container: STEP: delete the pod Mar 21 21:21:48.565: INFO: Waiting for pod downwardapi-volume-3dd1a91b-f23e-4a42-919f-5218c398f949 to disappear Mar 21 21:21:48.575: INFO: Pod downwardapi-volume-3dd1a91b-f23e-4a42-919f-5218c398f949 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:21:48.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9807" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1069,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:21:48.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-36ad4e96-b7d6-4039-aad2-795302609cb0 STEP: Creating a pod to test consume configMaps Mar 21 21:21:48.669: INFO: Waiting up to 5m0s for pod "pod-configmaps-047edb27-dca5-4cab-be40-3922610862c9" in namespace "configmap-6453" to be "success or failure" Mar 21 21:21:48.686: INFO: Pod "pod-configmaps-047edb27-dca5-4cab-be40-3922610862c9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.296016ms Mar 21 21:21:50.691: INFO: Pod "pod-configmaps-047edb27-dca5-4cab-be40-3922610862c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021693658s Mar 21 21:21:52.695: INFO: Pod "pod-configmaps-047edb27-dca5-4cab-be40-3922610862c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02594724s STEP: Saw pod success Mar 21 21:21:52.695: INFO: Pod "pod-configmaps-047edb27-dca5-4cab-be40-3922610862c9" satisfied condition "success or failure" Mar 21 21:21:52.698: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-047edb27-dca5-4cab-be40-3922610862c9 container configmap-volume-test: STEP: delete the pod Mar 21 21:21:52.714: INFO: Waiting for pod pod-configmaps-047edb27-dca5-4cab-be40-3922610862c9 to disappear Mar 21 21:21:52.751: INFO: Pod pod-configmaps-047edb27-dca5-4cab-be40-3922610862c9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:21:52.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6453" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1085,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:21:52.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8120.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8120.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8120.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8120.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8120.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8120.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 21:21:58.895: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:21:58.898: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:21:58.900: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:21:58.903: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:21:58.911: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:21:58.913: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:21:58.915: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:21:58.918: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:21:58.923: INFO: Lookups using dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local] Mar 21 21:22:03.928: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource 
(get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:22:03.932: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:22:03.936: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:22:03.940: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:22:03.949: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:22:03.952: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:22:03.955: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:22:03.958: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:22:03.964: INFO: Lookups using dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local] Mar 21 21:22:08.928: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:22:08.932: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:22:08.936: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689) Mar 21 21:22:08.939: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local from 
pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:08.948: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:08.950: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:08.952: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:08.955: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:08.959: INFO: Lookups using dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local]
Mar 21 21:22:13.927: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:13.931: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:13.934: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:13.937: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:13.946: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:13.950: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:13.952: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:13.956: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:13.962: INFO: Lookups using dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local]
Mar 21 21:22:18.928: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:18.932: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:18.935: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:18.939: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:18.949: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:18.952: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:18.954: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:18.957: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:18.963: INFO: Lookups using dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local]
Mar 21 21:22:23.927: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:23.931: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:23.934: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:23.938: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:23.945: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:23.947: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:23.949: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:23.952: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local from pod dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689: the server could not find the requested resource (get pods dns-test-980c61cc-72b9-4357-bc43-a137473a2689)
Mar 21 21:22:23.957: INFO: Lookups using dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8120.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8120.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local jessie_udp@dns-test-service-2.dns-8120.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8120.svc.cluster.local]
Mar 21 21:22:28.966: INFO: DNS probes using dns-8120/dns-test-980c61cc-72b9-4357-bc43-a137473a2689 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:22:29.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8120" for this suite.
• [SLOW TEST:36.485 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":64,"skipped":1116,"failed":0}
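What the dig loops in this spec are checking: a pod that sets spec.hostname and spec.subdomain to match a headless Service gets an A record of the form <hostname>.<subdomain>.<namespace>.svc.cluster.local. A minimal Go sketch of the same polling lookup (not the e2e framework's own code; it assumes it runs inside a cluster pod, so /etc/resolv.conf points at cluster DNS, and it reuses an FQDN from the log above):

// dnsprobe.go — sketch of the probe pods' retry-until-resolvable loop.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// FQDN taken from the log above: the pod A record served because the
	// pod's spec.subdomain matches the headless service dns-test-service-2.
	const fqdn = "dns-querier-2.dns-test-service-2.dns-8120.svc.cluster.local"
	var r net.Resolver
	for i := 0; i < 600; i++ { // mirrors the `seq 1 600` loop in the probe script
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		addrs, err := r.LookupHost(ctx, fqdn)
		cancel()
		if err == nil && len(addrs) > 0 {
			fmt.Println("OK:", addrs) // the real probe writes OK to /results/<name>
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("lookup never succeeded")
}

The "Unable to read ... (get pods ...)" entries above are transient: the test reads probe results through the pod proxy subresource, and retries every five seconds until both the wheezy and jessie probes report OK.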
[Conformance]","total":278,"completed":65,"skipped":1118,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:22:36.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 21 21:22:36.705: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7223 /api/v1/namespaces/watch-7223/configmaps/e2e-watch-test-watch-closed e71cef99-3dad-4b9b-9778-82de69a57a49 1645892 0 2020-03-21 21:22:36 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 21 21:22:36.705: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7223 /api/v1/namespaces/watch-7223/configmaps/e2e-watch-test-watch-closed e71cef99-3dad-4b9b-9778-82de69a57a49 1645893 0 2020-03-21 21:22:36 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 21 21:22:36.775: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7223 /api/v1/namespaces/watch-7223/configmaps/e2e-watch-test-watch-closed e71cef99-3dad-4b9b-9778-82de69a57a49 1645895 0 2020-03-21 21:22:36 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 21 21:22:36.776: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7223 /api/v1/namespaces/watch-7223/configmaps/e2e-watch-test-watch-closed e71cef99-3dad-4b9b-9778-82de69a57a49 1645897 0 2020-03-21 21:22:36 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:22:36.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7223" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":66,"skipped":1130,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:22:36.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8538 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 21 21:22:36.882: INFO: Found 0 stateful pods, waiting for 3 Mar 21 21:22:46.887: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 21:22:46.887: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 21:22:46.887: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 21 21:22:56.887: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 21:22:56.887: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 21:22:56.887: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 21 21:22:56.913: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 21 21:23:06.948: INFO: Updating stateful set ss2 Mar 21 21:23:06.974: INFO: Waiting for Pod statefulset-8538/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 21 21:23:17.268: INFO: Found 2 stateful pods, waiting for 3 Mar 21 21:23:27.273: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 21:23:27.273: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 21:23:27.273: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 21 21:23:27.297: INFO: Updating stateful set ss2 Mar 21 21:23:27.309: INFO: Waiting for Pod statefulset-8538/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 21 21:23:37.338: INFO: Updating stateful set ss2 Mar 21 21:23:37.350: INFO: Waiting for StatefulSet statefulset-8538/ss2 to 
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:22:36.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8538
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Mar 21 21:22:36.882: INFO: Found 0 stateful pods, waiting for 3
Mar 21 21:22:46.887: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 21:22:46.887: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 21:22:46.887: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Mar 21 21:22:56.887: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 21:22:56.887: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 21:22:56.887: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Mar 21 21:22:56.913: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Mar 21 21:23:06.948: INFO: Updating stateful set ss2
Mar 21 21:23:06.974: INFO: Waiting for Pod statefulset-8538/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Mar 21 21:23:17.268: INFO: Found 2 stateful pods, waiting for 3
Mar 21 21:23:27.273: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 21:23:27.273: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 21:23:27.273: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Mar 21 21:23:27.297: INFO: Updating stateful set ss2
Mar 21 21:23:27.309: INFO: Waiting for Pod statefulset-8538/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 21 21:23:37.338: INFO: Updating stateful set ss2
Mar 21 21:23:37.350: INFO: Waiting for StatefulSet statefulset-8538/ss2 to complete update
Mar 21 21:23:37.351: INFO: Waiting for Pod statefulset-8538/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 21 21:23:47.358: INFO: Deleting all statefulset in ns statefulset-8538
Mar 21 21:23:47.361: INFO: Scaling statefulset ss2 to 0
Mar 21 21:24:07.388: INFO: Waiting for statefulset status.replicas updated to 0
Mar 21 21:24:07.395: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:24:07.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8538" for this suite.
• [SLOW TEST:90.599 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":67,"skipped":1142,"failed":0}
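The canary and phased phases above are driven by the StatefulSet's RollingUpdate partition: only pods with an ordinal >= the partition are moved to the new revision, so with 3 replicas a partition of 3 applies nothing, a partition of 2 updates only ss2-2 (the canary), and lowering the partition step by step rolls out the rest. A small sketch with the apps/v1 types (an illustrative helper, not the e2e framework's code):

// partition.go — set the rolling-update partition on a StatefulSet spec.
package sts

import (
	appsv1 "k8s.io/api/apps/v1"
)

// withPartition configures ss so that a template change only updates pods
// whose ordinal is >= partition; the controller holds the rest at the old
// revision until the partition is lowered.
func withPartition(ss *appsv1.StatefulSet, partition int32) *appsv1.StatefulSet {
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	return ss
}

The "Restoring Pods to the correct revision when they are deleted" step above checks the other half of the contract: a pod deleted mid-rollout is recreated at whichever revision its ordinal is entitled to under the current partition.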
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:24:07.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 21 21:24:07.616: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"83f60416-0470-41ed-a7ab-4c042c2ec896", Controller:(*bool)(0xc004fd2a12), BlockOwnerDeletion:(*bool)(0xc004fd2a13)}}
Mar 21 21:24:07.626: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"dbebf78a-8cbb-4c2a-b080-a0db24af2fd7", Controller:(*bool)(0xc0052041aa), BlockOwnerDeletion:(*bool)(0xc0052041ab)}}
Mar 21 21:24:07.649: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"155a9d12-77cb-41e2-8d9e-c20c6febdc8e", Controller:(*bool)(0xc004ff948a), BlockOwnerDeletion:(*bool)(0xc004ff948b)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:24:12.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9999" for this suite.
• [SLOW TEST:5.307 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":68,"skipped":1149,"failed":0}
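As the OwnerReferences dumps show, the spec wires the three pods into a deliberate cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and then verifies the garbage collector can still delete all of them instead of deadlocking on the circular dependency. A sketch of how such an ownerReference is built (an illustrative helper; pod creation and deletion are elided):

// ownerref.go — build a blocking controller ownerReference like those above.
package gc

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownedBy returns an ownerReferences slice pointing at the named Pod.
// BlockOwnerDeletion makes foreground deletion of the owner wait on this
// dependent, which is exactly what would deadlock if the collector handled
// cycles naively.
func ownedBy(ownerName string, ownerUID types.UID) []metav1.OwnerReference {
	controller, block := true, true
	return []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               ownerName,
		UID:                ownerUID,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}}
}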
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:24:12.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 21 21:24:12.842: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 21.680098ms)
Mar 21 21:24:12.845: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.565079ms)
Mar 21 21:24:12.862: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 17.169163ms)
Mar 21 21:24:12.874: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 12.268714ms)
Mar 21 21:24:12.897: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 22.75832ms)
Mar 21 21:24:12.904: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 6.827653ms)
Mar 21 21:24:12.910: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.921883ms)
Mar 21 21:24:12.916: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.574966ms)
Mar 21 21:24:12.922: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 6.249581ms)
Mar 21 21:24:12.928: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.907089ms)
Mar 21 21:24:12.934: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.680039ms)
Mar 21 21:24:12.939: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.807837ms)
Mar 21 21:24:12.946: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 6.139121ms)
Mar 21 21:24:12.951: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.587788ms)
Mar 21 21:24:12.969: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 18.011714ms)
Mar 21 21:24:13.032: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 62.482866ms)
Mar 21 21:24:13.054: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 22.325631ms)
Mar 21 21:24:13.066: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 11.434714ms)
Mar 21 21:24:13.077: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 11.470706ms)
Mar 21 21:24:13.090: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 12.543146ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:24:13.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7378" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":69,"skipped":1200,"failed":0}
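Each numbered probe above is one GET against the node's proxy subresource: the apiserver forwards the request to the kubelet on that node, whose /logs/ endpoint serves a directory listing of /var/log (hence the containers/ and pods/ entries), and the test records the HTTP status and latency. A minimal client-go sketch of one such request (assuming a recent client-go; the node name is taken from the log above):

// proxylogs.go — fetch a node's /logs/ listing via the apiserver proxy.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /api/v1/nodes/jerma-worker/proxy/logs/ — the same URL the probes hit.
	body, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("jerma-worker").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}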
SSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:24:13.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:25:13.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9651" for this suite.
• [SLOW TEST:60.105 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1203,"failed":0}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 21:25:13.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8885
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-8885
STEP: Waiting until
all stateful set ss replicas will be running in namespace statefulset-8885 Mar 21 21:25:13.299: INFO: Found 0 stateful pods, waiting for 1 Mar 21 21:25:23.303: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 21 21:25:23.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8885 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 21 21:25:25.899: INFO: stderr: "I0321 21:25:25.779683 707 log.go:172] (0xc0000f62c0) (0xc0006e2000) Create stream\nI0321 21:25:25.779727 707 log.go:172] (0xc0000f62c0) (0xc0006e2000) Stream added, broadcasting: 1\nI0321 21:25:25.783044 707 log.go:172] (0xc0000f62c0) Reply frame received for 1\nI0321 21:25:25.783107 707 log.go:172] (0xc0000f62c0) (0xc000601cc0) Create stream\nI0321 21:25:25.783133 707 log.go:172] (0xc0000f62c0) (0xc000601cc0) Stream added, broadcasting: 3\nI0321 21:25:25.783977 707 log.go:172] (0xc0000f62c0) Reply frame received for 3\nI0321 21:25:25.784010 707 log.go:172] (0xc0000f62c0) (0xc0006e20a0) Create stream\nI0321 21:25:25.784018 707 log.go:172] (0xc0000f62c0) (0xc0006e20a0) Stream added, broadcasting: 5\nI0321 21:25:25.784844 707 log.go:172] (0xc0000f62c0) Reply frame received for 5\nI0321 21:25:25.864794 707 log.go:172] (0xc0000f62c0) Data frame received for 5\nI0321 21:25:25.864814 707 log.go:172] (0xc0006e20a0) (5) Data frame handling\nI0321 21:25:25.864825 707 log.go:172] (0xc0006e20a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0321 21:25:25.890638 707 log.go:172] (0xc0000f62c0) Data frame received for 3\nI0321 21:25:25.890684 707 log.go:172] (0xc000601cc0) (3) Data frame handling\nI0321 21:25:25.890717 707 log.go:172] (0xc000601cc0) (3) Data frame sent\nI0321 21:25:25.891064 707 log.go:172] (0xc0000f62c0) Data frame received for 5\nI0321 21:25:25.891131 707 log.go:172] (0xc0006e20a0) (5) Data frame handling\nI0321 21:25:25.891165 707 log.go:172] (0xc0000f62c0) Data frame received for 3\nI0321 21:25:25.891194 707 log.go:172] (0xc000601cc0) (3) Data frame handling\nI0321 21:25:25.892953 707 log.go:172] (0xc0000f62c0) Data frame received for 1\nI0321 21:25:25.892972 707 log.go:172] (0xc0006e2000) (1) Data frame handling\nI0321 21:25:25.892984 707 log.go:172] (0xc0006e2000) (1) Data frame sent\nI0321 21:25:25.892999 707 log.go:172] (0xc0000f62c0) (0xc0006e2000) Stream removed, broadcasting: 1\nI0321 21:25:25.893035 707 log.go:172] (0xc0000f62c0) Go away received\nI0321 21:25:25.893495 707 log.go:172] (0xc0000f62c0) (0xc0006e2000) Stream removed, broadcasting: 1\nI0321 21:25:25.893514 707 log.go:172] (0xc0000f62c0) (0xc000601cc0) Stream removed, broadcasting: 3\nI0321 21:25:25.893524 707 log.go:172] (0xc0000f62c0) (0xc0006e20a0) Stream removed, broadcasting: 5\n" Mar 21 21:25:25.899: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 21 21:25:25.899: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 21 21:25:25.922: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 21 21:25:35.926: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 21 21:25:35.926: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 21:25:35.994: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 21:25:35.994: 
INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:13 +0000 UTC }] Mar 21 21:25:35.994: INFO: Mar 21 21:25:35.994: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 21 21:25:36.998: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.93890757s Mar 21 21:25:38.002: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.934712425s Mar 21 21:25:39.102: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.930850206s Mar 21 21:25:40.107: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.831034037s Mar 21 21:25:41.112: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.825918064s Mar 21 21:25:42.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.820922826s Mar 21 21:25:43.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.815111112s Mar 21 21:25:44.128: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.810320511s Mar 21 21:25:45.133: INFO: Verifying statefulset ss doesn't scale past 3 for another 804.554505ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8885 Mar 21 21:25:46.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8885 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 21 21:25:46.366: INFO: stderr: "I0321 21:25:46.263896 742 log.go:172] (0xc00010b3f0) (0xc000ab2000) Create stream\nI0321 21:25:46.263963 742 log.go:172] (0xc00010b3f0) (0xc000ab2000) Stream added, broadcasting: 1\nI0321 21:25:46.267321 742 log.go:172] (0xc00010b3f0) Reply frame received for 1\nI0321 21:25:46.267374 742 log.go:172] (0xc00010b3f0) (0xc000ab20a0) Create stream\nI0321 21:25:46.267390 742 log.go:172] (0xc00010b3f0) (0xc000ab20a0) Stream added, broadcasting: 3\nI0321 21:25:46.268616 742 log.go:172] (0xc00010b3f0) Reply frame received for 3\nI0321 21:25:46.268667 742 log.go:172] (0xc00010b3f0) (0xc00099c000) Create stream\nI0321 21:25:46.268683 742 log.go:172] (0xc00010b3f0) (0xc00099c000) Stream added, broadcasting: 5\nI0321 21:25:46.270006 742 log.go:172] (0xc00010b3f0) Reply frame received for 5\nI0321 21:25:46.360842 742 log.go:172] (0xc00010b3f0) Data frame received for 3\nI0321 21:25:46.360896 742 log.go:172] (0xc000ab20a0) (3) Data frame handling\nI0321 21:25:46.360931 742 log.go:172] (0xc000ab20a0) (3) Data frame sent\nI0321 21:25:46.360949 742 log.go:172] (0xc00010b3f0) Data frame received for 3\nI0321 21:25:46.360962 742 log.go:172] (0xc000ab20a0) (3) Data frame handling\nI0321 21:25:46.361023 742 log.go:172] (0xc00010b3f0) Data frame received for 5\nI0321 21:25:46.361042 742 log.go:172] (0xc00099c000) (5) Data frame handling\nI0321 21:25:46.361068 742 log.go:172] (0xc00099c000) (5) Data frame sent\nI0321 21:25:46.361088 742 log.go:172] (0xc00010b3f0) Data frame received for 5\nI0321 21:25:46.361104 742 log.go:172] (0xc00099c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0321 21:25:46.362698 742 log.go:172] (0xc00010b3f0) Data frame received for 1\nI0321 21:25:46.362716 742 
log.go:172] (0xc000ab2000) (1) Data frame handling\nI0321 21:25:46.362726 742 log.go:172] (0xc000ab2000) (1) Data frame sent\nI0321 21:25:46.362746 742 log.go:172] (0xc00010b3f0) (0xc000ab2000) Stream removed, broadcasting: 1\nI0321 21:25:46.362787 742 log.go:172] (0xc00010b3f0) Go away received\nI0321 21:25:46.363211 742 log.go:172] (0xc00010b3f0) (0xc000ab2000) Stream removed, broadcasting: 1\nI0321 21:25:46.363237 742 log.go:172] (0xc00010b3f0) (0xc000ab20a0) Stream removed, broadcasting: 3\nI0321 21:25:46.363247 742 log.go:172] (0xc00010b3f0) (0xc00099c000) Stream removed, broadcasting: 5\n" Mar 21 21:25:46.367: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 21 21:25:46.367: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 21 21:25:46.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8885 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 21 21:25:46.602: INFO: stderr: "I0321 21:25:46.518939 764 log.go:172] (0xc000af91e0) (0xc000942640) Create stream\nI0321 21:25:46.519001 764 log.go:172] (0xc000af91e0) (0xc000942640) Stream added, broadcasting: 1\nI0321 21:25:46.524084 764 log.go:172] (0xc000af91e0) Reply frame received for 1\nI0321 21:25:46.524154 764 log.go:172] (0xc000af91e0) (0xc0005f25a0) Create stream\nI0321 21:25:46.524189 764 log.go:172] (0xc000af91e0) (0xc0005f25a0) Stream added, broadcasting: 3\nI0321 21:25:46.525415 764 log.go:172] (0xc000af91e0) Reply frame received for 3\nI0321 21:25:46.525467 764 log.go:172] (0xc000af91e0) (0xc0006f7360) Create stream\nI0321 21:25:46.525481 764 log.go:172] (0xc000af91e0) (0xc0006f7360) Stream added, broadcasting: 5\nI0321 21:25:46.526618 764 log.go:172] (0xc000af91e0) Reply frame received for 5\nI0321 21:25:46.596738 764 log.go:172] (0xc000af91e0) Data frame received for 3\nI0321 21:25:46.596784 764 log.go:172] (0xc0005f25a0) (3) Data frame handling\nI0321 21:25:46.596810 764 log.go:172] (0xc0005f25a0) (3) Data frame sent\nI0321 21:25:46.596829 764 log.go:172] (0xc000af91e0) Data frame received for 3\nI0321 21:25:46.596847 764 log.go:172] (0xc0005f25a0) (3) Data frame handling\nI0321 21:25:46.596864 764 log.go:172] (0xc000af91e0) Data frame received for 5\nI0321 21:25:46.596877 764 log.go:172] (0xc0006f7360) (5) Data frame handling\nI0321 21:25:46.596897 764 log.go:172] (0xc0006f7360) (5) Data frame sent\nI0321 21:25:46.596915 764 log.go:172] (0xc000af91e0) Data frame received for 5\nI0321 21:25:46.596931 764 log.go:172] (0xc0006f7360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0321 21:25:46.598706 764 log.go:172] (0xc000af91e0) Data frame received for 1\nI0321 21:25:46.598720 764 log.go:172] (0xc000942640) (1) Data frame handling\nI0321 21:25:46.598726 764 log.go:172] (0xc000942640) (1) Data frame sent\nI0321 21:25:46.598734 764 log.go:172] (0xc000af91e0) (0xc000942640) Stream removed, broadcasting: 1\nI0321 21:25:46.598783 764 log.go:172] (0xc000af91e0) Go away received\nI0321 21:25:46.598935 764 log.go:172] (0xc000af91e0) (0xc000942640) Stream removed, broadcasting: 1\nI0321 21:25:46.598947 764 log.go:172] (0xc000af91e0) (0xc0005f25a0) Stream removed, broadcasting: 3\nI0321 21:25:46.598952 764 log.go:172] (0xc000af91e0) (0xc0006f7360) Stream removed, broadcasting: 5\n" Mar 21 21:25:46.602: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 21 21:25:46.602: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 21 21:25:46.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8885 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 21 21:25:46.810: INFO: stderr: "I0321 21:25:46.739934 785 log.go:172] (0xc000557130) (0xc0006e7b80) Create stream\nI0321 21:25:46.740000 785 log.go:172] (0xc000557130) (0xc0006e7b80) Stream added, broadcasting: 1\nI0321 21:25:46.742950 785 log.go:172] (0xc000557130) Reply frame received for 1\nI0321 21:25:46.743011 785 log.go:172] (0xc000557130) (0xc0009d8000) Create stream\nI0321 21:25:46.743026 785 log.go:172] (0xc000557130) (0xc0009d8000) Stream added, broadcasting: 3\nI0321 21:25:46.744170 785 log.go:172] (0xc000557130) Reply frame received for 3\nI0321 21:25:46.744210 785 log.go:172] (0xc000557130) (0xc0006e7d60) Create stream\nI0321 21:25:46.744222 785 log.go:172] (0xc000557130) (0xc0006e7d60) Stream added, broadcasting: 5\nI0321 21:25:46.745584 785 log.go:172] (0xc000557130) Reply frame received for 5\nI0321 21:25:46.804934 785 log.go:172] (0xc000557130) Data frame received for 5\nI0321 21:25:46.804959 785 log.go:172] (0xc0006e7d60) (5) Data frame handling\nI0321 21:25:46.804967 785 log.go:172] (0xc0006e7d60) (5) Data frame sent\nI0321 21:25:46.804973 785 log.go:172] (0xc000557130) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0321 21:25:46.804978 785 log.go:172] (0xc0006e7d60) (5) Data frame handling\nI0321 21:25:46.805053 785 log.go:172] (0xc000557130) Data frame received for 3\nI0321 21:25:46.805093 785 log.go:172] (0xc0009d8000) (3) Data frame handling\nI0321 21:25:46.805217 785 log.go:172] (0xc0009d8000) (3) Data frame sent\nI0321 21:25:46.805245 785 log.go:172] (0xc000557130) Data frame received for 3\nI0321 21:25:46.805255 785 log.go:172] (0xc0009d8000) (3) Data frame handling\nI0321 21:25:46.806640 785 log.go:172] (0xc000557130) Data frame received for 1\nI0321 21:25:46.806657 785 log.go:172] (0xc0006e7b80) (1) Data frame handling\nI0321 21:25:46.806666 785 log.go:172] (0xc0006e7b80) (1) Data frame sent\nI0321 21:25:46.806679 785 log.go:172] (0xc000557130) (0xc0006e7b80) Stream removed, broadcasting: 1\nI0321 21:25:46.806692 785 log.go:172] (0xc000557130) Go away received\nI0321 21:25:46.806984 785 log.go:172] (0xc000557130) (0xc0006e7b80) Stream removed, broadcasting: 1\nI0321 21:25:46.807001 785 log.go:172] (0xc000557130) (0xc0009d8000) Stream removed, broadcasting: 3\nI0321 21:25:46.807008 785 log.go:172] (0xc000557130) (0xc0006e7d60) Stream removed, broadcasting: 5\n" Mar 21 21:25:46.810: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 21 21:25:46.810: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 21 21:25:46.832: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 21:25:46.832: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 21:25:46.832: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 21 21:25:46.836: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8885 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 21 21:25:47.026: INFO: stderr: "I0321 21:25:46.972002 807 log.go:172] (0xc000902b00) (0xc0008de000) Create stream\nI0321 21:25:46.972101 807 log.go:172] (0xc000902b00) (0xc0008de000) Stream added, broadcasting: 1\nI0321 21:25:46.974784 807 log.go:172] (0xc000902b00) Reply frame received for 1\nI0321 21:25:46.974827 807 log.go:172] (0xc000902b00) (0xc00064bc20) Create stream\nI0321 21:25:46.974853 807 log.go:172] (0xc000902b00) (0xc00064bc20) Stream added, broadcasting: 3\nI0321 21:25:46.975778 807 log.go:172] (0xc000902b00) Reply frame received for 3\nI0321 21:25:46.975810 807 log.go:172] (0xc000902b00) (0xc00064be00) Create stream\nI0321 21:25:46.975818 807 log.go:172] (0xc000902b00) (0xc00064be00) Stream added, broadcasting: 5\nI0321 21:25:46.976393 807 log.go:172] (0xc000902b00) Reply frame received for 5\nI0321 21:25:47.019811 807 log.go:172] (0xc000902b00) Data frame received for 5\nI0321 21:25:47.019889 807 log.go:172] (0xc00064be00) (5) Data frame handling\nI0321 21:25:47.019927 807 log.go:172] (0xc00064be00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0321 21:25:47.019970 807 log.go:172] (0xc000902b00) Data frame received for 3\nI0321 21:25:47.020011 807 log.go:172] (0xc00064bc20) (3) Data frame handling\nI0321 21:25:47.020036 807 log.go:172] (0xc000902b00) Data frame received for 5\nI0321 21:25:47.020069 807 log.go:172] (0xc00064be00) (5) Data frame handling\nI0321 21:25:47.020107 807 log.go:172] (0xc00064bc20) (3) Data frame sent\nI0321 21:25:47.020131 807 log.go:172] (0xc000902b00) Data frame received for 3\nI0321 21:25:47.020150 807 log.go:172] (0xc00064bc20) (3) Data frame handling\nI0321 21:25:47.021741 807 log.go:172] (0xc000902b00) Data frame received for 1\nI0321 21:25:47.021769 807 log.go:172] (0xc0008de000) (1) Data frame handling\nI0321 21:25:47.021792 807 log.go:172] (0xc0008de000) (1) Data frame sent\nI0321 21:25:47.021814 807 log.go:172] (0xc000902b00) (0xc0008de000) Stream removed, broadcasting: 1\nI0321 21:25:47.021843 807 log.go:172] (0xc000902b00) Go away received\nI0321 21:25:47.022206 807 log.go:172] (0xc000902b00) (0xc0008de000) Stream removed, broadcasting: 1\nI0321 21:25:47.022230 807 log.go:172] (0xc000902b00) (0xc00064bc20) Stream removed, broadcasting: 3\nI0321 21:25:47.022242 807 log.go:172] (0xc000902b00) (0xc00064be00) Stream removed, broadcasting: 5\n" Mar 21 21:25:47.026: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 21 21:25:47.026: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 21 21:25:47.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8885 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 21 21:25:47.248: INFO: stderr: "I0321 21:25:47.153421 828 log.go:172] (0xc0007aeb00) (0xc0005e5a40) Create stream\nI0321 21:25:47.153473 828 log.go:172] (0xc0007aeb00) (0xc0005e5a40) Stream added, broadcasting: 1\nI0321 21:25:47.156004 828 log.go:172] (0xc0007aeb00) Reply frame received for 1\nI0321 21:25:47.156036 828 log.go:172] (0xc0007aeb00) (0xc0008620a0) Create stream\nI0321 21:25:47.156048 828 log.go:172] (0xc0007aeb00) (0xc0008620a0) Stream added, broadcasting: 3\nI0321 21:25:47.156808 828 log.go:172] (0xc0007aeb00) Reply frame 
received for 3\nI0321 21:25:47.156832 828 log.go:172] (0xc0007aeb00) (0xc0003a4000) Create stream\nI0321 21:25:47.156841 828 log.go:172] (0xc0007aeb00) (0xc0003a4000) Stream added, broadcasting: 5\nI0321 21:25:47.157708 828 log.go:172] (0xc0007aeb00) Reply frame received for 5\nI0321 21:25:47.216748 828 log.go:172] (0xc0007aeb00) Data frame received for 5\nI0321 21:25:47.216778 828 log.go:172] (0xc0003a4000) (5) Data frame handling\nI0321 21:25:47.216807 828 log.go:172] (0xc0003a4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0321 21:25:47.241810 828 log.go:172] (0xc0007aeb00) Data frame received for 3\nI0321 21:25:47.241855 828 log.go:172] (0xc0008620a0) (3) Data frame handling\nI0321 21:25:47.241875 828 log.go:172] (0xc0008620a0) (3) Data frame sent\nI0321 21:25:47.241890 828 log.go:172] (0xc0007aeb00) Data frame received for 3\nI0321 21:25:47.241903 828 log.go:172] (0xc0008620a0) (3) Data frame handling\nI0321 21:25:47.242138 828 log.go:172] (0xc0007aeb00) Data frame received for 5\nI0321 21:25:47.242167 828 log.go:172] (0xc0003a4000) (5) Data frame handling\nI0321 21:25:47.243960 828 log.go:172] (0xc0007aeb00) Data frame received for 1\nI0321 21:25:47.243981 828 log.go:172] (0xc0005e5a40) (1) Data frame handling\nI0321 21:25:47.243995 828 log.go:172] (0xc0005e5a40) (1) Data frame sent\nI0321 21:25:47.244018 828 log.go:172] (0xc0007aeb00) (0xc0005e5a40) Stream removed, broadcasting: 1\nI0321 21:25:47.244035 828 log.go:172] (0xc0007aeb00) Go away received\nI0321 21:25:47.244495 828 log.go:172] (0xc0007aeb00) (0xc0005e5a40) Stream removed, broadcasting: 1\nI0321 21:25:47.244518 828 log.go:172] (0xc0007aeb00) (0xc0008620a0) Stream removed, broadcasting: 3\nI0321 21:25:47.244530 828 log.go:172] (0xc0007aeb00) (0xc0003a4000) Stream removed, broadcasting: 5\n" Mar 21 21:25:47.248: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 21 21:25:47.248: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 21 21:25:47.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8885 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 21 21:25:47.507: INFO: stderr: "I0321 21:25:47.395403 850 log.go:172] (0xc00096a0b0) (0xc000a00000) Create stream\nI0321 21:25:47.395490 850 log.go:172] (0xc00096a0b0) (0xc000a00000) Stream added, broadcasting: 1\nI0321 21:25:47.400919 850 log.go:172] (0xc00096a0b0) Reply frame received for 1\nI0321 21:25:47.401005 850 log.go:172] (0xc00096a0b0) (0xc000613cc0) Create stream\nI0321 21:25:47.401027 850 log.go:172] (0xc00096a0b0) (0xc000613cc0) Stream added, broadcasting: 3\nI0321 21:25:47.401966 850 log.go:172] (0xc00096a0b0) Reply frame received for 3\nI0321 21:25:47.401999 850 log.go:172] (0xc00096a0b0) (0xc000613d60) Create stream\nI0321 21:25:47.402010 850 log.go:172] (0xc00096a0b0) (0xc000613d60) Stream added, broadcasting: 5\nI0321 21:25:47.402892 850 log.go:172] (0xc00096a0b0) Reply frame received for 5\nI0321 21:25:47.459108 850 log.go:172] (0xc00096a0b0) Data frame received for 5\nI0321 21:25:47.459142 850 log.go:172] (0xc000613d60) (5) Data frame handling\nI0321 21:25:47.459161 850 log.go:172] (0xc000613d60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0321 21:25:47.499606 850 log.go:172] (0xc00096a0b0) Data frame received for 3\nI0321 21:25:47.499631 850 log.go:172] (0xc000613cc0) (3) Data frame 
handling\nI0321 21:25:47.499641 850 log.go:172] (0xc000613cc0) (3) Data frame sent\nI0321 21:25:47.500110 850 log.go:172] (0xc00096a0b0) Data frame received for 5\nI0321 21:25:47.500207 850 log.go:172] (0xc000613d60) (5) Data frame handling\nI0321 21:25:47.500236 850 log.go:172] (0xc00096a0b0) Data frame received for 3\nI0321 21:25:47.500246 850 log.go:172] (0xc000613cc0) (3) Data frame handling\nI0321 21:25:47.502429 850 log.go:172] (0xc00096a0b0) Data frame received for 1\nI0321 21:25:47.502447 850 log.go:172] (0xc000a00000) (1) Data frame handling\nI0321 21:25:47.502454 850 log.go:172] (0xc000a00000) (1) Data frame sent\nI0321 21:25:47.502462 850 log.go:172] (0xc00096a0b0) (0xc000a00000) Stream removed, broadcasting: 1\nI0321 21:25:47.502481 850 log.go:172] (0xc00096a0b0) Go away received\nI0321 21:25:47.502835 850 log.go:172] (0xc00096a0b0) (0xc000a00000) Stream removed, broadcasting: 1\nI0321 21:25:47.502854 850 log.go:172] (0xc00096a0b0) (0xc000613cc0) Stream removed, broadcasting: 3\nI0321 21:25:47.502865 850 log.go:172] (0xc00096a0b0) (0xc000613d60) Stream removed, broadcasting: 5\n" Mar 21 21:25:47.507: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 21 21:25:47.507: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 21 21:25:47.507: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 21:25:47.515: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 21 21:25:57.520: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 21 21:25:57.520: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 21 21:25:57.520: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 21 21:25:57.533: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 21:25:57.533: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:13 +0000 UTC }] Mar 21 21:25:57.533: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:35 +0000 UTC }] Mar 21 21:25:57.534: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:35 +0000 UTC }] Mar 21 21:25:57.534: INFO: Mar 21 21:25:57.534: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 21 
21:25:59.066: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 21:25:59.067: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:13 +0000 UTC }] Mar 21 21:25:59.067: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:35 +0000 UTC }] Mar 21 21:25:59.067: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:35 +0000 UTC }] Mar 21 21:25:59.067: INFO: Mar 21 21:25:59.067: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 21 21:26:00.071: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 21:26:00.071: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:13 +0000 UTC }] Mar 21 21:26:00.071: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:35 +0000 UTC }] Mar 21 21:26:00.071: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:35 +0000 UTC }] Mar 21 21:26:00.071: INFO: Mar 21 21:26:00.071: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 21 21:26:01.074: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 21:26:01.074: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:13 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:13 +0000 UTC }]
Mar 21 21:26:01.074: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:35 +0000 UTC }]
Mar 21 21:26:01.075: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 21:25:35 +0000 UTC }]
Mar 21 21:26:01.075: INFO:
Mar 21 21:26:01.075: INFO: StatefulSet ss has not reached scale 0, at 3
[... the same status poll repeated once per second from 21:26:02 through 21:26:07; each iteration listed the same three Pending pods with identical conditions and ended "StatefulSet ss has not reached scale 0, at 3" ...]
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8885
Mar 21 21:26:08.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8885 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 21 21:26:08.230: INFO: rc: 1
Mar 21 21:26:08.231: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8885 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1
[... the same RunHostCmd attempt and 10s retry repeated from 21:26:18 through 21:31:01; every attempt returned rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 ...]
Mar 21 21:31:11.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8885 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 21 21:31:11.498: INFO: rc: 1
Mar 21 21:31:11.498: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0:
Mar 21 21:31:11.498: INFO: Scaling statefulset ss to 0
Mar 21 21:31:11.506: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 21 21:31:11.508: INFO: Deleting all statefulset in ns statefulset-8885
Mar 21 21:31:11.511: INFO: Scaling statefulset ss to 0
Mar 21 21:31:11.518: INFO: Waiting for statefulset status.replicas updated to 0
Mar 21 21:31:11.520: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 21:31:11.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8885" for this suite.
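The NotFound churn above is expected: the scale-down deleted ss-0 while the RunHostCmd helper was still retrying its exec, so every attempt after the pod vanished failed until the helper gave up; the trailing || true in the hostexec command is why the test tolerates the failure. The same scale-down can be reproduced with kubectl; a minimal sketch (the namespace is this run's throwaway one, so substitute your own):

# Scale the StatefulSet to zero and read status.replicas back, as the test does via the API.
kubectl -n statefulset-8885 scale statefulset ss --replicas=0
kubectl -n statefulset-8885 get statefulset ss -o jsonpath='{.status.replicas}'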
• [SLOW TEST:358.330 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":71,"skipped":1203,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:31:11.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:31:11.576: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 21 21:31:14.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6854 create -f -' Mar 21 21:31:17.717: INFO: stderr: "" Mar 21 21:31:17.717: INFO: stdout: "e2e-test-crd-publish-openapi-5604-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 21 21:31:17.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6854 delete e2e-test-crd-publish-openapi-5604-crds test-cr' Mar 21 21:31:17.839: INFO: stderr: "" Mar 21 21:31:17.839: INFO: stdout: "e2e-test-crd-publish-openapi-5604-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 21 21:31:17.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6854 apply -f -' Mar 21 21:31:18.082: INFO: stderr: "" Mar 21 21:31:18.082: INFO: stdout: "e2e-test-crd-publish-openapi-5604-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 21 21:31:18.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6854 delete e2e-test-crd-publish-openapi-5604-crds test-cr' Mar 21 21:31:18.187: INFO: stderr: "" Mar 21 21:31:18.187: INFO: stdout: "e2e-test-crd-publish-openapi-5604-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 21 21:31:18.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5604-crds' Mar 21 21:31:19.224: INFO: stderr: "" Mar 21 21:31:19.224: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5604-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n 
preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:31:21.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6854" for this suite. • [SLOW TEST:9.561 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":72,"skipped":1203,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:31:21.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 21 21:31:21.143: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 21:31:21.167: INFO: Waiting for terminating namespaces to be deleted... 
Mar 21 21:31:21.170: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 21 21:31:21.191: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 21 21:31:21.191: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 21:31:21.191: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 21 21:31:21.191: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 21:31:21.191: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 21 21:31:21.209: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 21 21:31:21.209: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 21:31:21.209: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 21 21:31:21.209: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3536e464-7352-4875-b296-f79e44dd7498 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-3536e464-7352-4875-b296-f79e44dd7498 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-3536e464-7352-4875-b296-f79e44dd7498 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:31:29.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3088" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.300 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":73,"skipped":1234,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:31:29.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:31:46.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8578" for this suite. • [SLOW TEST:17.133 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":74,"skipped":1244,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:31:46.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 21 21:31:46.590: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:31:53.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-277" for this suite. 
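The RestartNever case above hinges on ordering: init containers run one at a time, each to completion, before any app container starts, and with restartPolicy: Never a failing init container marks the pod failed for good. A minimal sketch of such a pod (pod name, container names, and images are illustrative, not the test's own):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo              # illustrative name
spec:
  restartPolicy: Never
  initContainers:              # run sequentially, each must exit 0
  - name: init-1
    image: busybox
    command: ['sh', '-c', 'echo first init done']
  - name: init-2
    image: busybox
    command: ['sh', '-c', 'echo second init done']
  containers:                  # starts only after both init containers succeed
  - name: run
    image: busybox
    command: ['sh', '-c', 'echo app running']
EOF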
• [SLOW TEST:7.059 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":75,"skipped":1251,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:31:53.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-6637 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6637 STEP: Deleting pre-stop pod Mar 21 21:32:06.925: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:32:06.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6637" for this suite. 
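The hook fires on deletion: when the tester pod is deleted, the kubelet runs its preStop handler before sending SIGTERM, which is how the server pod above comes to record "prestop": 1. A minimal sketch of a pod carrying such a hook (name, image, and handler command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo           # illustrative name
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: web
    image: nginx
    lifecycle:
      preStop:
        exec:                  # runs inside the container before SIGTERM
          command: ['sh', '-c', 'echo prestop ran > /tmp/prestop; sleep 2']
EOF
# Deleting the pod is what triggers the hook:
kubectl delete pod prestop-demo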
• [SLOW TEST:13.363 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":76,"skipped":1267,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:32:06.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:32:06.997: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:32:07.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5910" for this suite. 
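Creating and deleting a CRD, which is all this test exercises, is equally direct with kubectl, and once the CRD is established the earlier CustomResourcePublishOpenAPI tests show kubectl explain picking up the published schema. A minimal sketch (group, kind, and names are illustrative), using x-kubernetes-preserve-unknown-fields as in the embedded-object test above:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com    # illustrative group and plural
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept arbitrary fields
EOF
kubectl explain widgets                              # schema is published to OpenAPI
kubectl delete crd widgets.example.com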
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":77,"skipped":1272,"failed":0} SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:32:07.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 21 21:32:12.449: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7702 pod-service-account-64a083ef-eed3-45f0-a610-d4ccea553492 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 21 21:32:12.714: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7702 pod-service-account-64a083ef-eed3-45f0-a610-d4ccea553492 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 21 21:32:12.917: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7702 pod-service-account-64a083ef-eed3-45f0-a610-d4ccea553492 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:32:13.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7702" for this suite. 
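The three files the test reads are the standard projection every pod with an auto-mounted API token sees, at a fixed path. A minimal sketch (the pod name is a placeholder for any running pod in the namespace):

# The auto-created service account appears as three files in each container:
kubectl -n svcaccounts-7702 exec <pod-name> -- ls /var/run/secrets/kubernetes.io/serviceaccount
# ca.crt  namespace  token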
• [SLOW TEST:5.336 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":78,"skipped":1283,"failed":0} SS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:32:13.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:32:13.204: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-cc59be30-4740-4003-b0a5-e09efd2b3610" in namespace "security-context-test-189" to be "success or failure" Mar 21 21:32:13.208: INFO: Pod "alpine-nnp-false-cc59be30-4740-4003-b0a5-e09efd2b3610": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722633ms Mar 21 21:32:15.213: INFO: Pod "alpine-nnp-false-cc59be30-4740-4003-b0a5-e09efd2b3610": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009644153s Mar 21 21:32:17.217: INFO: Pod "alpine-nnp-false-cc59be30-4740-4003-b0a5-e09efd2b3610": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013152153s Mar 21 21:32:17.217: INFO: Pod "alpine-nnp-false-cc59be30-4740-4003-b0a5-e09efd2b3610" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:32:17.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-189" for this suite. 
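Setting allowPrivilegeEscalation: false asks the runtime to set the no_new_privs bit on the container's first process, which is what the alpine-nnp-false pod checks before exiting 0. A minimal sketch of the same shape (pod name and probe command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nnp-demo               # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: alpine
    command: ['sh', '-c', 'grep NoNewPrivs /proc/self/status']   # expect NoNewPrivs: 1
    securityContext:
      allowPrivilegeEscalation: false
EOF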
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1285,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:32:17.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-92ea092b-8eca-46fe-8cb5-4e9d9208d2cb Mar 21 21:32:17.334: INFO: Pod name my-hostname-basic-92ea092b-8eca-46fe-8cb5-4e9d9208d2cb: Found 0 pods out of 1 Mar 21 21:32:22.359: INFO: Pod name my-hostname-basic-92ea092b-8eca-46fe-8cb5-4e9d9208d2cb: Found 1 pods out of 1 Mar 21 21:32:22.359: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-92ea092b-8eca-46fe-8cb5-4e9d9208d2cb" are running Mar 21 21:32:22.369: INFO: Pod "my-hostname-basic-92ea092b-8eca-46fe-8cb5-4e9d9208d2cb-6cbxh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 21:32:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 21:32:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 21:32:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 21:32:17 +0000 UTC Reason: Message:}]) Mar 21 21:32:22.369: INFO: Trying to dial the pod Mar 21 21:32:27.381: INFO: Controller my-hostname-basic-92ea092b-8eca-46fe-8cb5-4e9d9208d2cb: Got expected result from replica 1 [my-hostname-basic-92ea092b-8eca-46fe-8cb5-4e9d9208d2cb-6cbxh]: "my-hostname-basic-92ea092b-8eca-46fe-8cb5-4e9d9208d2cb-6cbxh", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:32:27.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6447" for this suite. 
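The ReplicationController here is the v1 original: a flat label selector plus a pod template, with the test dialing each replica until it answers with its own pod name. A minimal sketch of the same shape (name, labels, and image are illustrative; the suite uses a hostname-serving test image rather than nginx):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: hostname-rc            # illustrative name
spec:
  replicas: 1
  selector:                    # RCs use a plain map, not matchLabels
    app: hostname-rc
  template:
    metadata:
      labels:
        app: hostname-rc
    spec:
      containers:
      - name: serve
        image: nginx
        ports:
        - containerPort: 80
EOF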
• [SLOW TEST:10.158 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":80,"skipped":1303,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:32:27.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 21:32:27.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-651d50c2-50b5-48b5-9bcc-54497df58096" in namespace "projected-1346" to be "success or failure" Mar 21 21:32:27.448: INFO: Pod "downwardapi-volume-651d50c2-50b5-48b5-9bcc-54497df58096": Phase="Pending", Reason="", readiness=false. Elapsed: 2.613684ms Mar 21 21:32:29.452: INFO: Pod "downwardapi-volume-651d50c2-50b5-48b5-9bcc-54497df58096": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006359754s Mar 21 21:32:31.456: INFO: Pod "downwardapi-volume-651d50c2-50b5-48b5-9bcc-54497df58096": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010908176s STEP: Saw pod success Mar 21 21:32:31.456: INFO: Pod "downwardapi-volume-651d50c2-50b5-48b5-9bcc-54497df58096" satisfied condition "success or failure" Mar 21 21:32:31.459: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-651d50c2-50b5-48b5-9bcc-54497df58096 container client-container: STEP: delete the pod Mar 21 21:32:31.478: INFO: Waiting for pod downwardapi-volume-651d50c2-50b5-48b5-9bcc-54497df58096 to disappear Mar 21 21:32:31.483: INFO: Pod downwardapi-volume-651d50c2-50b5-48b5-9bcc-54497df58096 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:32:31.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1346" for this suite. 
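The mode assertion is on the projected item itself: each downwardAPI item in a projected volume can carry its own file mode, and the test reads the file's permissions back from inside the container. A minimal sketch (pod name and mount path are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ['sh', '-c', 'ls -l /etc/podinfo && cat /etc/podinfo/podname']
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400         # per-item file mode, the property the test asserts
EOF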
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1304,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:32:31.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 21 21:32:32.100: INFO: Pod name wrapped-volume-race-f27e9458-f4b7-4b48-8c1d-cfb4496b5fed: Found 0 pods out of 5 Mar 21 21:32:37.110: INFO: Pod name wrapped-volume-race-f27e9458-f4b7-4b48-8c1d-cfb4496b5fed: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f27e9458-f4b7-4b48-8c1d-cfb4496b5fed in namespace emptydir-wrapper-3664, will wait for the garbage collector to delete the pods Mar 21 21:32:51.214: INFO: Deleting ReplicationController wrapped-volume-race-f27e9458-f4b7-4b48-8c1d-cfb4496b5fed took: 7.474408ms Mar 21 21:32:51.514: INFO: Terminating ReplicationController wrapped-volume-race-f27e9458-f4b7-4b48-8c1d-cfb4496b5fed pods took: 300.265894ms STEP: Creating RC which spawns configmap-volume pods Mar 21 21:33:00.655: INFO: Pod name wrapped-volume-race-03276764-191a-405d-98fe-0b13b24b3bc0: Found 0 pods out of 5 Mar 21 21:33:05.663: INFO: Pod name wrapped-volume-race-03276764-191a-405d-98fe-0b13b24b3bc0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-03276764-191a-405d-98fe-0b13b24b3bc0 in namespace emptydir-wrapper-3664, will wait for the garbage collector to delete the pods Mar 21 21:33:19.746: INFO: Deleting ReplicationController wrapped-volume-race-03276764-191a-405d-98fe-0b13b24b3bc0 took: 7.999884ms Mar 21 21:33:20.146: INFO: Terminating ReplicationController wrapped-volume-race-03276764-191a-405d-98fe-0b13b24b3bc0 pods took: 400.251706ms STEP: Creating RC which spawns configmap-volume pods Mar 21 21:33:29.691: INFO: Pod name wrapped-volume-race-23125429-bd01-44c0-83da-bf2e55406976: Found 0 pods out of 5 Mar 21 21:33:34.698: INFO: Pod name wrapped-volume-race-23125429-bd01-44c0-83da-bf2e55406976: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-23125429-bd01-44c0-83da-bf2e55406976 in namespace emptydir-wrapper-3664, will wait for the garbage collector to delete the pods Mar 21 21:33:48.817: INFO: Deleting ReplicationController wrapped-volume-race-23125429-bd01-44c0-83da-bf2e55406976 took: 7.291931ms Mar 21 21:33:49.217: INFO: Terminating ReplicationController wrapped-volume-race-23125429-bd01-44c0-83da-bf2e55406976 pods took: 400.25656ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 
21:34:00.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3664" for this suite. • [SLOW TEST:89.540 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":82,"skipped":1305,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:34:01.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:34:01.123: INFO: Create a RollingUpdate DaemonSet Mar 21 21:34:01.145: INFO: Check that daemon pods launch on every node of the cluster Mar 21 21:34:01.149: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:34:01.154: INFO: Number of nodes with available pods: 0 Mar 21 21:34:01.154: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:34:02.159: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:34:02.162: INFO: Number of nodes with available pods: 0 Mar 21 21:34:02.162: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:34:03.160: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:34:03.164: INFO: Number of nodes with available pods: 0 Mar 21 21:34:03.164: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:34:04.158: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:34:04.162: INFO: Number of nodes with available pods: 0 Mar 21 21:34:04.162: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:34:05.159: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:34:05.163: INFO: Number of nodes with available pods: 2 Mar 21 21:34:05.163: INFO: Number of running nodes: 2, number of available pods: 2 Mar 21 21:34:05.163: INFO: Update the DaemonSet to trigger a rollout Mar 21 21:34:05.169: INFO: 
Updating DaemonSet daemon-set Mar 21 21:34:20.184: INFO: Roll back the DaemonSet before rollout is complete Mar 21 21:34:20.191: INFO: Updating DaemonSet daemon-set Mar 21 21:34:20.191: INFO: Make sure DaemonSet rollback is complete Mar 21 21:34:20.227: INFO: Wrong image for pod: daemon-set-8h8hs. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 21 21:34:20.227: INFO: Pod daemon-set-8h8hs is not available Mar 21 21:34:20.245: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:34:21.249: INFO: Wrong image for pod: daemon-set-8h8hs. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 21 21:34:21.249: INFO: Pod daemon-set-8h8hs is not available Mar 21 21:34:21.253: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 21:34:22.250: INFO: Pod daemon-set-5jhk5 is not available Mar 21 21:34:22.254: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3231, will wait for the garbage collector to delete the pods Mar 21 21:34:22.318: INFO: Deleting DaemonSet.extensions daemon-set took: 6.815996ms Mar 21 21:34:22.619: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.259349ms Mar 21 21:34:30.838: INFO: Number of nodes with available pods: 0 Mar 21 21:34:30.838: INFO: Number of running nodes: 0, number of available pods: 0 Mar 21 21:34:30.841: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3231/daemonsets","resourceVersion":"1649703"},"items":null} Mar 21 21:34:30.844: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3231/pods","resourceVersion":"1649703"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:34:30.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3231" for this suite. 
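The rollback above is driven entirely by the DaemonSet's pod template: the test flips the template image to foo:non-existent mid-rollout, then restores docker.io/library/httpd:2.4.38-alpine, and the per-pod image checks verify that pods still matching the restored template are kept, with only the pod that took the bad image (daemon-set-8h8hs, replaced by daemon-set-5jhk5) recreated. A minimal sketch of the kind of RollingUpdate DaemonSet involved, built with the same k8s.io/api types the e2e framework uses; the label key and container name are illustrative, not the ones the test helper generates:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative label key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is what makes the mid-rollout revert meaningful:
			// the controller replaces pods one node at a time.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app", // illustrative container name
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
```

Reverting .spec.template before the rollout completes is the whole rollback; the controller reconciles existing pods against the restored template instead of restarting every one of them.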
• [SLOW TEST:29.827 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":83,"skipped":1306,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:34:30.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:34:30.982: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 21 21:34:35.988: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 21 21:34:35.989: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 21 21:34:37.992: INFO: Creating deployment "test-rollover-deployment" Mar 21 21:34:38.043: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 21 21:34:40.062: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 21 21:34:40.068: INFO: Ensure that both replica sets have 1 created replica Mar 21 21:34:40.074: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 21 21:34:40.079: INFO: Updating deployment test-rollover-deployment Mar 21 21:34:40.079: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 21 21:34:42.087: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 21 21:34:42.093: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 21 21:34:42.098: INFO: all replica sets need to contain the pod-template-hash label Mar 21 21:34:42.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423280, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 
21:34:44.119: INFO: all replica sets need to contain the pod-template-hash label Mar 21 21:34:44.119: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423282, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 21:34:46.106: INFO: all replica sets need to contain the pod-template-hash label Mar 21 21:34:46.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423282, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 21:34:48.105: INFO: all replica sets need to contain the pod-template-hash label Mar 21 21:34:48.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423282, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 21:34:50.107: INFO: all replica sets need to contain the pod-template-hash label Mar 21 21:34:50.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423282, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 21:34:52.107: INFO: all replica sets need to contain the pod-template-hash label Mar 21 21:34:52.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423282, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423278, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 21:34:54.110: INFO: Mar 21 21:34:54.110: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 21 21:34:54.116: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9196 /apis/apps/v1/namespaces/deployment-9196/deployments/test-rollover-deployment 7f2ecdbf-2a28-41f6-858e-03a36b7b1385 1649869 2 2020-03-21 21:34:37 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d7f7f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-21 21:34:38 +0000 UTC,LastTransitionTime:2020-03-21 21:34:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-21 21:34:53 +0000 UTC,LastTransitionTime:2020-03-21 21:34:38 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 21 21:34:54.119: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-9196 /apis/apps/v1/namespaces/deployment-9196/replicasets/test-rollover-deployment-574d6dfbff bbcf64f4-898e-4f8d-9da2-541940a5d630 1649858 2 2020-03-21 21:34:40 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 7f2ecdbf-2a28-41f6-858e-03a36b7b1385 0xc002d7fc77 0xc002d7fc78}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d7fce8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 21 21:34:54.119: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 21 21:34:54.119: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9196 /apis/apps/v1/namespaces/deployment-9196/replicasets/test-rollover-controller e3f680f0-ba08-4c0d-a533-e80598e8964e 1649868 2 2020-03-21 21:34:30 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 7f2ecdbf-2a28-41f6-858e-03a36b7b1385 0xc002d7fb8f 0xc002d7fba0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d7fc08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 21 21:34:54.119: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-9196 /apis/apps/v1/namespaces/deployment-9196/replicasets/test-rollover-deployment-f6c94f66c 2329fbde-4299-468b-b0d3-bfef6e0befb1 1649810 2 2020-03-21 21:34:38 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 7f2ecdbf-2a28-41f6-858e-03a36b7b1385 0xc002d7fd50 0xc002d7fd51}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d7fdc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 21 21:34:54.122: INFO: Pod "test-rollover-deployment-574d6dfbff-fh42j" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-fh42j test-rollover-deployment-574d6dfbff- deployment-9196 /api/v1/namespaces/deployment-9196/pods/test-rollover-deployment-574d6dfbff-fh42j 83c0dd90-81cb-491e-9606-294aa63d4b38 1649826 0 2020-03-21 21:34:40 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff bbcf64f4-898e-4f8d-9da2-541940a5d630 0xc002956307 0xc002956308}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mwf2w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mwf2w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mwf2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 21:34:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 21:34:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 21:34:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 21:34:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.152,StartTime:2020-03-21 21:34:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-21 21:34:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://c2439d7634d94257b470f60d110869b642fe08f7bc761dc78e405a9fa36b91cf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.152,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:34:54.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9196" for this suite. • [SLOW TEST:23.271 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":84,"skipped":1319,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:34:54.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 21:34:54.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7fa7dc40-1d87-4cc4-ba2f-8b227b26fcce" in namespace "downward-api-5088" to be "success or failure" Mar 21 21:34:54.250: INFO: Pod "downwardapi-volume-7fa7dc40-1d87-4cc4-ba2f-8b227b26fcce": Phase="Pending", Reason="", readiness=false. Elapsed: 14.22759ms Mar 21 21:34:56.253: INFO: Pod "downwardapi-volume-7fa7dc40-1d87-4cc4-ba2f-8b227b26fcce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017393031s Mar 21 21:34:58.258: INFO: Pod "downwardapi-volume-7fa7dc40-1d87-4cc4-ba2f-8b227b26fcce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021840159s STEP: Saw pod success Mar 21 21:34:58.258: INFO: Pod "downwardapi-volume-7fa7dc40-1d87-4cc4-ba2f-8b227b26fcce" satisfied condition "success or failure" Mar 21 21:34:58.261: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7fa7dc40-1d87-4cc4-ba2f-8b227b26fcce container client-container: STEP: delete the pod Mar 21 21:34:58.296: INFO: Waiting for pod downwardapi-volume-7fa7dc40-1d87-4cc4-ba2f-8b227b26fcce to disappear Mar 21 21:34:58.315: INFO: Pod downwardapi-volume-7fa7dc40-1d87-4cc4-ba2f-8b227b26fcce no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:34:58.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5088" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1340,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:34:58.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:34:58.398: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:34:59.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1653" for this suite. 
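The defaulting test above finishes with nothing but API calls: it registers a CRD whose structural schema declares defaults, then checks that the apiserver fills them in both on create requests and when objects are read back from storage. A sketch of such a CRD using the apiextensions.k8s.io/v1 Go types; the group, kind, and defaulted field are hypothetical stand-ins for whatever the test registers:

```go
package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"}, // must be <plural>.<group>
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {
								Type: "object",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									// Omitted on create, this comes back as "red":
									// defaulting applies to requests and to reads
									// from storage alike.
									"color": {
										Type:    "string",
										Default: &apiextensionsv1.JSON{Raw: []byte(`"red"`)},
									},
								},
							},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
```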
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":86,"skipped":1347,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:34:59.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:35:13.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-73" for this suite. • [SLOW TEST:14.082 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":87,"skipped":1360,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:35:13.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 21 21:35:13.847: INFO: Waiting up to 5m0s for pod "pod-1cb0665f-cf5b-4a09-aec8-cce6c8777ca6" in namespace "emptydir-2265" to be "success or failure" Mar 21 21:35:13.876: INFO: Pod "pod-1cb0665f-cf5b-4a09-aec8-cce6c8777ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.990429ms Mar 21 21:35:15.880: INFO: Pod "pod-1cb0665f-cf5b-4a09-aec8-cce6c8777ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032674131s Mar 21 21:35:17.885: INFO: Pod "pod-1cb0665f-cf5b-4a09-aec8-cce6c8777ca6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037120162s STEP: Saw pod success Mar 21 21:35:17.885: INFO: Pod "pod-1cb0665f-cf5b-4a09-aec8-cce6c8777ca6" satisfied condition "success or failure" Mar 21 21:35:17.887: INFO: Trying to get logs from node jerma-worker2 pod pod-1cb0665f-cf5b-4a09-aec8-cce6c8777ca6 container test-container: STEP: delete the pod Mar 21 21:35:17.907: INFO: Waiting for pod pod-1cb0665f-cf5b-4a09-aec8-cce6c8777ca6 to disappear Mar 21 21:35:17.911: INFO: Pod pod-1cb0665f-cf5b-4a09-aec8-cce6c8777ca6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:35:17.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2265" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1369,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:35:17.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 21 21:35:17.997: INFO: Waiting up to 5m0s for pod "pod-207ea9ff-530d-4da7-95d3-a3b0c8ee1d02" in namespace "emptydir-1011" to be "success or failure" Mar 21 21:35:18.014: INFO: Pod "pod-207ea9ff-530d-4da7-95d3-a3b0c8ee1d02": Phase="Pending", Reason="", readiness=false. Elapsed: 16.776748ms Mar 21 21:35:20.062: INFO: Pod "pod-207ea9ff-530d-4da7-95d3-a3b0c8ee1d02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06445712s Mar 21 21:35:22.066: INFO: Pod "pod-207ea9ff-530d-4da7-95d3-a3b0c8ee1d02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068464028s STEP: Saw pod success Mar 21 21:35:22.066: INFO: Pod "pod-207ea9ff-530d-4da7-95d3-a3b0c8ee1d02" satisfied condition "success or failure" Mar 21 21:35:22.070: INFO: Trying to get logs from node jerma-worker pod pod-207ea9ff-530d-4da7-95d3-a3b0c8ee1d02 container test-container: STEP: delete the pod Mar 21 21:35:22.104: INFO: Waiting for pod pod-207ea9ff-530d-4da7-95d3-a3b0c8ee1d02 to disappear Mar 21 21:35:22.108: INFO: Pod pod-207ea9ff-530d-4da7-95d3-a3b0c8ee1d02 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:35:22.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1011" for this suite. 
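Both EmptyDir cases above, (root,0666,default) and (non-root,0666,tmpfs), follow one pattern: a pod with a single emptyDir volume whose container creates a file with the requested mode and prints what it sees, which is what the "Trying to get logs ... container test-container" step collects. A rough equivalent in Go; the real test container is an agnhost mount-test image driven by flags, so the busybox command here is an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-check"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty Medium means the node's default backing store;
				// the tmpfs variant sets Medium: corev1.StorageMediumMemory.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in for the agnhost mounttest image
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a %U' /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```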
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1374,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:35:22.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-83004bd6-f0ac-4b1c-bcc0-a6a2899941db STEP: Creating a pod to test consume configMaps Mar 21 21:35:22.188: INFO: Waiting up to 5m0s for pod "pod-configmaps-1abc90ea-3ef3-41ce-a881-0e272efea93e" in namespace "configmap-5215" to be "success or failure" Mar 21 21:35:22.230: INFO: Pod "pod-configmaps-1abc90ea-3ef3-41ce-a881-0e272efea93e": Phase="Pending", Reason="", readiness=false. Elapsed: 41.644755ms Mar 21 21:35:24.242: INFO: Pod "pod-configmaps-1abc90ea-3ef3-41ce-a881-0e272efea93e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053241699s Mar 21 21:35:26.246: INFO: Pod "pod-configmaps-1abc90ea-3ef3-41ce-a881-0e272efea93e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057194682s STEP: Saw pod success Mar 21 21:35:26.246: INFO: Pod "pod-configmaps-1abc90ea-3ef3-41ce-a881-0e272efea93e" satisfied condition "success or failure" Mar 21 21:35:26.249: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1abc90ea-3ef3-41ce-a881-0e272efea93e container configmap-volume-test: STEP: delete the pod Mar 21 21:35:26.265: INFO: Waiting for pod pod-configmaps-1abc90ea-3ef3-41ce-a881-0e272efea93e to disappear Mar 21 21:35:26.270: INFO: Pod pod-configmaps-1abc90ea-3ef3-41ce-a881-0e272efea93e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:35:26.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5215" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1392,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:35:26.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 21:35:26.882: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 21:35:28.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423326, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423326, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423326, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423326, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 21:35:31.944: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 21 21:35:36.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4766 to-be-attached-pod -i -c=container1' Mar 21 21:35:36.126: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:35:36.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4766" for this suite. STEP: Destroying namespace "webhook-4766-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.088 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":91,"skipped":1418,"failed":0} [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:35:36.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 21 21:35:36.490: INFO: Waiting up to 5m0s for pod "var-expansion-afd63b10-2fcc-4b0b-9aba-3f72bf0ec347" in namespace "var-expansion-1971" to be "success or failure" Mar 21 21:35:36.499: INFO: Pod "var-expansion-afd63b10-2fcc-4b0b-9aba-3f72bf0ec347": Phase="Pending", Reason="", readiness=false. Elapsed: 9.188455ms Mar 21 21:35:38.530: INFO: Pod "var-expansion-afd63b10-2fcc-4b0b-9aba-3f72bf0ec347": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0396283s Mar 21 21:35:40.534: INFO: Pod "var-expansion-afd63b10-2fcc-4b0b-9aba-3f72bf0ec347": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044176621s STEP: Saw pod success Mar 21 21:35:40.534: INFO: Pod "var-expansion-afd63b10-2fcc-4b0b-9aba-3f72bf0ec347" satisfied condition "success or failure" Mar 21 21:35:40.599: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-afd63b10-2fcc-4b0b-9aba-3f72bf0ec347 container dapi-container: STEP: delete the pod Mar 21 21:35:40.686: INFO: Waiting for pod var-expansion-afd63b10-2fcc-4b0b-9aba-3f72bf0ec347 to disappear Mar 21 21:35:40.703: INFO: Pod var-expansion-afd63b10-2fcc-4b0b-9aba-3f72bf0ec347 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:35:40.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1971" for this suite. 
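The env composition above is the kubelet's $(VAR) substitution: an entry in .spec.containers[].env may reference previously defined variables, and the expansion happens before the container starts, independent of any shell. A minimal sketch; the variable names and values are illustrative (the container name dapi-container matches the log):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // stand-in image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// $(FOO) and $(BAR) are expanded by the kubelet, not the
					// shell; references to undefined variables are left as-is.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```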
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1418,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:35:40.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:35:40.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 21 21:35:41.432: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-21T21:35:41Z generation:1 name:name1 resourceVersion:1650366 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2247e9b8-f18c-4f99-9e87-0c618fd1ee18] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 21 21:35:51.438: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-21T21:35:51Z generation:1 name:name2 resourceVersion:1650410 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4c6568da-07e8-4309-9fa6-5ad9e5ea34ed] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 21 21:36:01.446: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-21T21:35:41Z generation:2 name:name1 resourceVersion:1650440 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2247e9b8-f18c-4f99-9e87-0c618fd1ee18] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 21 21:36:11.452: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-21T21:35:51Z generation:2 name:name2 resourceVersion:1650472 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4c6568da-07e8-4309-9fa6-5ad9e5ea34ed] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 21 21:36:21.460: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-21T21:35:41Z generation:2 name:name1 resourceVersion:1650502 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2247e9b8-f18c-4f99-9e87-0c618fd1ee18] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 21 21:36:31.467: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-21T21:35:51Z generation:2 name:name2 resourceVersion:1650530 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 
uid:4c6568da-07e8-4309-9fa6-5ad9e5ea34ed] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:36:41.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1122" for this suite. • [SLOW TEST:61.278 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":93,"skipped":1453,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:36:41.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 21:36:42.457: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 21:36:44.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423402, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423402, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423402, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423402, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 21:36:47.503: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:36:47.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-966-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:36:48.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8039" for this suite. STEP: Destroying namespace "webhook-8039-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.705 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":94,"skipped":1468,"failed":0} [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:36:48.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 21 21:36:56.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 21 21:36:56.811: INFO: Pod pod-with-poststart-exec-hook still exists Mar 21 21:36:58.811: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 21 21:36:58.816: INFO: Pod pod-with-poststart-exec-hook still exists Mar 21 21:37:00.811: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 21 21:37:00.815: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:37:00.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3639" for this suite. 
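The poststart hook is declared on the container itself: the kubelet executes it right after the container starts and does not mark the container Running until the hook returns. In the test the hook calls out to the HTTP handler pod created in BeforeEach to prove it ran; the sketch below substitutes a trivial command. Field names follow recent k8s.io/api releases, where the handler type is corev1.LifecycleHandler (older releases call it corev1.Handler):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox", // stand-in image
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container immediately after start; a
					// failing hook causes the container to be killed.
					PostStart: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo poststart > /tmp/hook"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```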
• [SLOW TEST:12.130 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1468,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:37:00.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 21:37:00.903: INFO: Waiting up to 5m0s for pod "downwardapi-volume-629f5a29-76fa-41ba-b8cf-3a82e4437be2" in namespace "projected-3312" to be "success or failure" Mar 21 21:37:00.930: INFO: Pod "downwardapi-volume-629f5a29-76fa-41ba-b8cf-3a82e4437be2": Phase="Pending", Reason="", readiness=false. Elapsed: 26.764694ms Mar 21 21:37:02.934: INFO: Pod "downwardapi-volume-629f5a29-76fa-41ba-b8cf-3a82e4437be2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030670338s Mar 21 21:37:04.939: INFO: Pod "downwardapi-volume-629f5a29-76fa-41ba-b8cf-3a82e4437be2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035227971s STEP: Saw pod success Mar 21 21:37:04.939: INFO: Pod "downwardapi-volume-629f5a29-76fa-41ba-b8cf-3a82e4437be2" satisfied condition "success or failure" Mar 21 21:37:04.942: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-629f5a29-76fa-41ba-b8cf-3a82e4437be2 container client-container: STEP: delete the pod Mar 21 21:37:04.981: INFO: Waiting for pod downwardapi-volume-629f5a29-76fa-41ba-b8cf-3a82e4437be2 to disappear Mar 21 21:37:04.985: INFO: Pod downwardapi-volume-629f5a29-76fa-41ba-b8cf-3a82e4437be2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:37:04.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3312" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1470,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:37:05.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:37:21.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8616" for this suite. • [SLOW TEST:16.224 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":97,"skipped":1471,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:37:21.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1178 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-1178 Mar 21 21:37:21.366: INFO: Found 0 stateful pods, waiting for 1 Mar 21 21:37:31.371: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 21 21:37:31.402: INFO: Deleting all statefulset in ns statefulset-1178 Mar 21 21:37:31.411: INFO: Scaling statefulset ss to 0 Mar 21 21:37:51.457: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 21:37:51.460: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:37:51.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1178" for this suite. 
• [SLOW TEST:30.245 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":98,"skipped":1476,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:37:51.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-a45c2bc1-cb59-4b9e-aa53-111b93542853 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:37:55.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-930" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1484,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:37:55.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 21 21:37:55.696: INFO: Waiting up to 5m0s for pod "var-expansion-bea46891-3705-4cc0-9a2e-f9fee3047be4" in namespace "var-expansion-4763" to be "success or failure" Mar 21 21:37:55.700: INFO: Pod "var-expansion-bea46891-3705-4cc0-9a2e-f9fee3047be4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128248ms Mar 21 21:37:57.723: INFO: Pod "var-expansion-bea46891-3705-4cc0-9a2e-f9fee3047be4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026288655s Mar 21 21:37:59.727: INFO: Pod "var-expansion-bea46891-3705-4cc0-9a2e-f9fee3047be4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030569625s STEP: Saw pod success Mar 21 21:37:59.727: INFO: Pod "var-expansion-bea46891-3705-4cc0-9a2e-f9fee3047be4" satisfied condition "success or failure" Mar 21 21:37:59.730: INFO: Trying to get logs from node jerma-worker pod var-expansion-bea46891-3705-4cc0-9a2e-f9fee3047be4 container dapi-container: STEP: delete the pod Mar 21 21:37:59.759: INFO: Waiting for pod var-expansion-bea46891-3705-4cc0-9a2e-f9fee3047be4 to disappear Mar 21 21:37:59.770: INFO: Pod var-expansion-bea46891-3705-4cc0-9a2e-f9fee3047be4 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:37:59.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4763" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1511,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:37:59.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 21 21:37:59.826: INFO: >>> kubeConfig: /root/.kube/config Mar 21 21:38:02.772: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:38:13.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1747" for this suite. 
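------------------------------
[Editor's note] The "two CRDs show up in OpenAPI documentation" check above can be approximated with a hand-written pair of CRDs sharing a group and version. A minimal sketch using the apiextensions.k8s.io/v1 API; the group and kind names are illustrative, not the generated e2e-test-crd-publish-openapi-* names:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: foos
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bars.demo.example.com
spec:
  group: demo.example.com           # same group and version ...
  scope: Namespaced
  names:
    plural: bars
    kind: Bar                       # ... different kind
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF

# Both kinds should then be served with published schemas:
kubectl explain foos
kubectl explain bars
------------------------------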
• [SLOW TEST:13.599 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":101,"skipped":1540,"failed":0} S ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:38:13.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8732.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8732.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8732.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8732.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8732.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8732.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 21:38:19.488: INFO: DNS probes using dns-8732/dns-test-f40bde4a-1819-4bf6-9df7-bff871850ebf succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:38:19.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8732" for this suite. 
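------------------------------
[Editor's note] The getent probes above rely on the kubelet-managed /etc/hosts: when a pod sets spec.hostname (and optionally spec.subdomain), the kubelet writes a matching entry next to the pod IP. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hosts-demo
spec:
  hostname: demo-querier        # the kubelet adds this to the pod's /etc/hosts
  subdomain: demo-svc
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
EOF

kubectl exec hosts-demo -- grep demo-querier /etc/hosts
------------------------------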
• [SLOW TEST:6.186 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":102,"skipped":1541,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:38:19.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-55a8c738-d846-457f-b628-9b2718cd94b7 STEP: Creating a pod to test consume secrets Mar 21 21:38:19.893: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-31a7752e-05af-41b3-bec2-1d64f1d4629f" in namespace "projected-7550" to be "success or failure" Mar 21 21:38:19.914: INFO: Pod "pod-projected-secrets-31a7752e-05af-41b3-bec2-1d64f1d4629f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.088989ms Mar 21 21:38:21.918: INFO: Pod "pod-projected-secrets-31a7752e-05af-41b3-bec2-1d64f1d4629f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024842161s Mar 21 21:38:23.922: INFO: Pod "pod-projected-secrets-31a7752e-05af-41b3-bec2-1d64f1d4629f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028915093s STEP: Saw pod success Mar 21 21:38:23.922: INFO: Pod "pod-projected-secrets-31a7752e-05af-41b3-bec2-1d64f1d4629f" satisfied condition "success or failure" Mar 21 21:38:23.925: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-31a7752e-05af-41b3-bec2-1d64f1d4629f container projected-secret-volume-test: STEP: delete the pod Mar 21 21:38:23.966: INFO: Waiting for pod pod-projected-secrets-31a7752e-05af-41b3-bec2-1d64f1d4629f to disappear Mar 21 21:38:23.980: INFO: Pod pod-projected-secrets-31a7752e-05af-41b3-bec2-1d64f1d4629f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:38:23.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7550" for this suite. 
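------------------------------
[Editor's note] A hand-rolled version of the projected-secret fixture above, as a sketch (names and values are illustrative). defaultMode fixes the file permission bits, and the pod-level fsGroup controls the group ownership the non-root process sees:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: proj-secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: proj-secret-pod
spec:
  securityContext:
    runAsUser: 1000             # non-root
    fsGroup: 1000
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "id; ls -ln /etc/projected/; sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/projected
  volumes:
  - name: creds
    projected:
      defaultMode: 0440         # r-- for owner and group, nothing for others
      sources:
      - secret:
          name: proj-secret-demo
EOF
------------------------------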
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1594,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:38:24.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1518 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 21 21:38:24.050: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 21 21:38:46.203: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.211:8080/dial?request=hostname&protocol=http&host=10.244.1.162&port=8080&tries=1'] Namespace:pod-network-test-1518 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 21:38:46.203: INFO: >>> kubeConfig: /root/.kube/config I0321 21:38:46.243921 6 log.go:172] (0xc001c48580) (0xc00146e820) Create stream I0321 21:38:46.243958 6 log.go:172] (0xc001c48580) (0xc00146e820) Stream added, broadcasting: 1 I0321 21:38:46.246022 6 log.go:172] (0xc001c48580) Reply frame received for 1 I0321 21:38:46.246070 6 log.go:172] (0xc001c48580) (0xc000f7d180) Create stream I0321 21:38:46.246084 6 log.go:172] (0xc001c48580) (0xc000f7d180) Stream added, broadcasting: 3 I0321 21:38:46.247213 6 log.go:172] (0xc001c48580) Reply frame received for 3 I0321 21:38:46.247271 6 log.go:172] (0xc001c48580) (0xc00146e960) Create stream I0321 21:38:46.247298 6 log.go:172] (0xc001c48580) (0xc00146e960) Stream added, broadcasting: 5 I0321 21:38:46.248289 6 log.go:172] (0xc001c48580) Reply frame received for 5 I0321 21:38:46.338177 6 log.go:172] (0xc001c48580) Data frame received for 3 I0321 21:38:46.338226 6 log.go:172] (0xc000f7d180) (3) Data frame handling I0321 21:38:46.338273 6 log.go:172] (0xc000f7d180) (3) Data frame sent I0321 21:38:46.339077 6 log.go:172] (0xc001c48580) Data frame received for 5 I0321 21:38:46.339096 6 log.go:172] (0xc00146e960) (5) Data frame handling I0321 21:38:46.339122 6 log.go:172] (0xc001c48580) Data frame received for 3 I0321 21:38:46.339132 6 log.go:172] (0xc000f7d180) (3) Data frame handling I0321 21:38:46.340820 6 log.go:172] (0xc001c48580) Data frame received for 1 I0321 21:38:46.340838 6 log.go:172] (0xc00146e820) (1) Data frame handling I0321 21:38:46.340850 6 log.go:172] (0xc00146e820) (1) Data frame sent I0321 21:38:46.340868 6 log.go:172] (0xc001c48580) (0xc00146e820) Stream removed, broadcasting: 1 I0321 21:38:46.340947 6 log.go:172] (0xc001c48580) Go away received I0321 21:38:46.341214 6 log.go:172] (0xc001c48580) (0xc00146e820) Stream removed, broadcasting: 1 I0321 21:38:46.341234 6 log.go:172] 
(0xc001c48580) (0xc000f7d180) Stream removed, broadcasting: 3 I0321 21:38:46.341248 6 log.go:172] (0xc001c48580) (0xc00146e960) Stream removed, broadcasting: 5 Mar 21 21:38:46.341: INFO: Waiting for responses: map[] Mar 21 21:38:46.344: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.211:8080/dial?request=hostname&protocol=http&host=10.244.2.210&port=8080&tries=1'] Namespace:pod-network-test-1518 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 21:38:46.344: INFO: >>> kubeConfig: /root/.kube/config I0321 21:38:46.380831 6 log.go:172] (0xc001fdc630) (0xc000f7db80) Create stream I0321 21:38:46.380853 6 log.go:172] (0xc001fdc630) (0xc000f7db80) Stream added, broadcasting: 1 I0321 21:38:46.383251 6 log.go:172] (0xc001fdc630) Reply frame received for 1 I0321 21:38:46.383315 6 log.go:172] (0xc001fdc630) (0xc000d16000) Create stream I0321 21:38:46.383344 6 log.go:172] (0xc001fdc630) (0xc000d16000) Stream added, broadcasting: 3 I0321 21:38:46.384186 6 log.go:172] (0xc001fdc630) Reply frame received for 3 I0321 21:38:46.384225 6 log.go:172] (0xc001fdc630) (0xc0013ce000) Create stream I0321 21:38:46.384241 6 log.go:172] (0xc001fdc630) (0xc0013ce000) Stream added, broadcasting: 5 I0321 21:38:46.385462 6 log.go:172] (0xc001fdc630) Reply frame received for 5 I0321 21:38:46.442331 6 log.go:172] (0xc001fdc630) Data frame received for 3 I0321 21:38:46.442360 6 log.go:172] (0xc000d16000) (3) Data frame handling I0321 21:38:46.442374 6 log.go:172] (0xc000d16000) (3) Data frame sent I0321 21:38:46.443143 6 log.go:172] (0xc001fdc630) Data frame received for 3 I0321 21:38:46.443179 6 log.go:172] (0xc000d16000) (3) Data frame handling I0321 21:38:46.443434 6 log.go:172] (0xc001fdc630) Data frame received for 5 I0321 21:38:46.443483 6 log.go:172] (0xc0013ce000) (5) Data frame handling I0321 21:38:46.445534 6 log.go:172] (0xc001fdc630) Data frame received for 1 I0321 21:38:46.445567 6 log.go:172] (0xc000f7db80) (1) Data frame handling I0321 21:38:46.445587 6 log.go:172] (0xc000f7db80) (1) Data frame sent I0321 21:38:46.447936 6 log.go:172] (0xc001fdc630) (0xc000f7db80) Stream removed, broadcasting: 1 I0321 21:38:46.448072 6 log.go:172] (0xc001fdc630) (0xc000f7db80) Stream removed, broadcasting: 1 I0321 21:38:46.448103 6 log.go:172] (0xc001fdc630) (0xc000d16000) Stream removed, broadcasting: 3 I0321 21:38:46.448395 6 log.go:172] (0xc001fdc630) (0xc0013ce000) Stream removed, broadcasting: 5 Mar 21 21:38:46.448: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:38:46.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1518" for this suite. 
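------------------------------
[Editor's note] The curl above targets agnhost's /dial endpoint: the queried pod is asked to open its own HTTP connection to the target pod and report what came back, which proves pod-to-pod reachability rather than just apiserver proxying. A rough sketch of the same check by hand; the image tag is a guess for this release and both pod names are illustrative:

kubectl run netexec-a --restart=Never --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 -- netexec --http-port=8080
kubectl run netexec-b --restart=Never --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 -- netexec --http-port=8080
kubectl wait --for=condition=Ready pod/netexec-a pod/netexec-b
A=$(kubectl get pod netexec-a -o jsonpath='{.status.podIP}')
B=$(kubectl get pod netexec-b -o jsonpath='{.status.podIP}')
# Ask A to dial B and return B's hostname:
kubectl exec netexec-a -- curl -s "http://$A:8080/dial?request=hostname&protocol=http&host=$B&port=8080&tries=1"
------------------------------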
• [SLOW TEST:22.449 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1598,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:38:46.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0321 21:39:17.045925 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 21 21:39:17.045: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:39:17.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1768" for this suite. 
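------------------------------
[Editor's note] The orphaning step above corresponds to a delete with propagationPolicy=Orphan, which removes the Deployment but leaves its ReplicaSet (and pods) behind with their owner references cleared. A sketch using a throwaway deployment; kubectl of this era spells the same thing --cascade=false, while the raw API form is explicit:

kubectl create deployment nginx-demo --image=nginx:1.17
kubectl proxy --port=8001 &
curl -s -X DELETE localhost:8001/apis/apps/v1/namespaces/default/deployments/nginx-demo \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'
kill %1

# The ReplicaSet survives the deletion of its owner:
kubectl get rs -l app=nginx-demo
------------------------------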
• [SLOW TEST:30.594 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":105,"skipped":1613,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:39:17.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 21:39:18.030: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 21:39:20.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423558, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423558, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423558, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423558, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 21:39:23.081: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:39:23.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9968" for this suite. STEP: Destroying namespace "webhook-9968-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.283 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":106,"skipped":1618,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:39:23.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:39:23.396: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 21 21:39:25.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3559 create -f -' Mar 21 21:39:28.174: INFO: stderr: "" Mar 21 21:39:28.174: INFO: stdout: "e2e-test-crd-publish-openapi-2454-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 21 21:39:28.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3559 delete e2e-test-crd-publish-openapi-2454-crds test-cr' Mar 21 21:39:28.297: INFO: stderr: "" Mar 21 21:39:28.297: INFO: stdout: "e2e-test-crd-publish-openapi-2454-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 21 21:39:28.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3559 apply -f -' Mar 21 21:39:28.576: INFO: stderr: "" Mar 21 21:39:28.576: INFO: stdout: "e2e-test-crd-publish-openapi-2454-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 21 21:39:28.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3559 delete e2e-test-crd-publish-openapi-2454-crds test-cr' Mar 21 21:39:28.686: INFO: stderr: "" Mar 21 21:39:28.686: INFO: stdout: "e2e-test-crd-publish-openapi-2454-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to 
explain CR Mar 21 21:39:28.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2454-crds' Mar 21 21:39:28.907: INFO: stderr: "" Mar 21 21:39:28.907: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2454-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:39:30.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3559" for this suite. • [SLOW TEST:7.465 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":107,"skipped":1650,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:39:30.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-tvnl STEP: Creating a pod to test atomic-volume-subpath Mar 21 21:39:30.864: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-tvnl" in namespace "subpath-1537" to be "success or failure" Mar 21 21:39:30.868: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167224ms Mar 21 21:39:32.872: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007909411s Mar 21 21:39:34.876: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Running", Reason="", readiness=true. Elapsed: 4.011827764s Mar 21 21:39:36.880: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Running", Reason="", readiness=true. Elapsed: 6.015683876s Mar 21 21:39:38.884: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Running", Reason="", readiness=true. Elapsed: 8.019674031s Mar 21 21:39:40.888: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Running", Reason="", readiness=true. Elapsed: 10.023965567s Mar 21 21:39:42.892: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Running", Reason="", readiness=true. Elapsed: 12.02818124s Mar 21 21:39:44.897: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.032502555s Mar 21 21:39:46.900: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Running", Reason="", readiness=true. Elapsed: 16.036155213s Mar 21 21:39:48.908: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Running", Reason="", readiness=true. Elapsed: 18.043909602s Mar 21 21:39:50.914: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Running", Reason="", readiness=true. Elapsed: 20.049660812s Mar 21 21:39:52.918: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Running", Reason="", readiness=true. Elapsed: 22.053709873s Mar 21 21:39:54.922: INFO: Pod "pod-subpath-test-secret-tvnl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.05772053s STEP: Saw pod success Mar 21 21:39:54.922: INFO: Pod "pod-subpath-test-secret-tvnl" satisfied condition "success or failure" Mar 21 21:39:54.924: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-tvnl container test-container-subpath-secret-tvnl: STEP: delete the pod Mar 21 21:39:55.064: INFO: Waiting for pod pod-subpath-test-secret-tvnl to disappear Mar 21 21:39:55.082: INFO: Pod pod-subpath-test-secret-tvnl no longer exists STEP: Deleting pod pod-subpath-test-secret-tvnl Mar 21 21:39:55.082: INFO: Deleting pod "pod-subpath-test-secret-tvnl" in namespace "subpath-1537" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:39:55.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1537" for this suite. • [SLOW TEST:24.291 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":108,"skipped":1658,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:39:55.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 21 21:39:58.221: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:39:58.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9105" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1664,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:39:58.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 21 21:39:58.333: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:40:05.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3114" for this suite. 
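------------------------------
[Editor's note] The InitContainer test above only logs the PodSpec, so here is a concrete sketch of what it exercises: init containers run one at a time, each to completion, before the app container starts, and restartPolicy Always (the default) applies once initialization is done. Names and images are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:               # run in order; each must exit 0
  - name: init-1
    image: busybox:1.29
    command: ["sh", "-c", "echo first init done"]
  - name: init-2
    image: busybox:1.29
    command: ["sh", "-c", "echo second init done"]
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
EOF

kubectl get pod init-demo -w    # Init:0/2 -> Init:1/2 -> PodInitializing -> Running
------------------------------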
• [SLOW TEST:7.596 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":110,"skipped":1721,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:40:05.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-c3748cf6-549c-43bd-a603-5a6983d1edc8 STEP: Creating a pod to test consume configMaps Mar 21 21:40:05.958: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cfeb1400-aa30-443e-8227-e754d528a7bd" in namespace "projected-2610" to be "success or failure" Mar 21 21:40:05.962: INFO: Pod "pod-projected-configmaps-cfeb1400-aa30-443e-8227-e754d528a7bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153709ms Mar 21 21:40:07.966: INFO: Pod "pod-projected-configmaps-cfeb1400-aa30-443e-8227-e754d528a7bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008104554s Mar 21 21:40:09.970: INFO: Pod "pod-projected-configmaps-cfeb1400-aa30-443e-8227-e754d528a7bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0120914s STEP: Saw pod success Mar 21 21:40:09.970: INFO: Pod "pod-projected-configmaps-cfeb1400-aa30-443e-8227-e754d528a7bd" satisfied condition "success or failure" Mar 21 21:40:09.973: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-cfeb1400-aa30-443e-8227-e754d528a7bd container projected-configmap-volume-test: STEP: delete the pod Mar 21 21:40:10.021: INFO: Waiting for pod pod-projected-configmaps-cfeb1400-aa30-443e-8227-e754d528a7bd to disappear Mar 21 21:40:10.034: INFO: Pod pod-projected-configmaps-cfeb1400-aa30-443e-8227-e754d528a7bd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:40:10.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2610" for this suite. 
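------------------------------
[Editor's note] A sketch of the projected-configMap consumption above (illustrative names). Functionally this matches a plain configMap volume; the projected form matters when several sources (configMaps, secrets, downwardAPI) need to share one mount:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: proj-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: proj-cm-pod
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/config/data-1; sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: proj-cm-demo
EOF

kubectl logs proj-cm-pod        # should print: value-1
------------------------------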
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1730,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:40:10.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 21 21:40:10.824: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 21 21:40:12.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423610, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423610, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423610, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720423610, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 21:40:15.860: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:40:15.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:40:17.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1944" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.204 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":112,"skipped":1747,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:40:17.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:40:33.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8389" for this suite. • [SLOW TEST:16.105 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":113,"skipped":1747,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:40:33.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:40:33.453: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 21 21:40:33.491: INFO: Number of nodes with available pods: 0 Mar 21 21:40:33.491: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Mar 21 21:40:33.519: INFO: Number of nodes with available pods: 0 Mar 21 21:40:33.519: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:34.523: INFO: Number of nodes with available pods: 0 Mar 21 21:40:34.523: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:35.523: INFO: Number of nodes with available pods: 0 Mar 21 21:40:35.524: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:36.523: INFO: Number of nodes with available pods: 0 Mar 21 21:40:36.523: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:37.524: INFO: Number of nodes with available pods: 1 Mar 21 21:40:37.524: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 21 21:40:37.556: INFO: Number of nodes with available pods: 1 Mar 21 21:40:37.556: INFO: Number of running nodes: 0, number of available pods: 1 Mar 21 21:40:38.559: INFO: Number of nodes with available pods: 0 Mar 21 21:40:38.559: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 21 21:40:38.565: INFO: Number of nodes with available pods: 0 Mar 21 21:40:38.565: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:39.569: INFO: Number of nodes with available pods: 0 Mar 21 21:40:39.569: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:40.569: INFO: Number of nodes with available pods: 0 Mar 21 21:40:40.569: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:41.569: INFO: Number of nodes with available pods: 0 Mar 21 21:40:41.569: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:42.569: INFO: Number of nodes with available pods: 0 Mar 21 21:40:42.569: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:43.569: INFO: Number of nodes with available pods: 0 Mar 21 21:40:43.569: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:44.569: INFO: Number of nodes with available pods: 0 Mar 21 21:40:44.569: INFO: Node jerma-worker is running more than one 
daemon pod Mar 21 21:40:45.569: INFO: Number of nodes with available pods: 0 Mar 21 21:40:45.569: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:46.568: INFO: Number of nodes with available pods: 0 Mar 21 21:40:46.568: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:47.569: INFO: Number of nodes with available pods: 0 Mar 21 21:40:47.569: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:48.568: INFO: Number of nodes with available pods: 0 Mar 21 21:40:48.568: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:49.571: INFO: Number of nodes with available pods: 0 Mar 21 21:40:49.571: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:50.570: INFO: Number of nodes with available pods: 0 Mar 21 21:40:50.570: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:51.588: INFO: Number of nodes with available pods: 0 Mar 21 21:40:51.588: INFO: Node jerma-worker is running more than one daemon pod Mar 21 21:40:52.569: INFO: Number of nodes with available pods: 1 Mar 21 21:40:52.569: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7508, will wait for the garbage collector to delete the pods Mar 21 21:40:52.634: INFO: Deleting DaemonSet.extensions daemon-set took: 6.129691ms Mar 21 21:40:52.934: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.239341ms Mar 21 21:40:55.738: INFO: Number of nodes with available pods: 0 Mar 21 21:40:55.738: INFO: Number of running nodes: 0, number of available pods: 0 Mar 21 21:40:55.740: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7508/daemonsets","resourceVersion":"1652190"},"items":null} Mar 21 21:40:55.743: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7508/pods","resourceVersion":"1652190"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:40:55.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7508" for this suite. 
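------------------------------
[Editor's note] The blue/green dance above is driven purely by node labels matching the DaemonSet's nodeSelector. A sketch with illustrative names; <node-name> is a placeholder:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-demo
spec:
  selector:
    matchLabels:
      app: ds-demo
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: ds-demo
    spec:
      nodeSelector:
        color: blue             # a pod runs only on nodes labelled color=blue
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
EOF

kubectl label node <node-name> color=blue                 # one daemon pod appears there
kubectl label node <node-name> color=green --overwrite    # ... and is evicted again
kubectl get pods -l app=ds-demo -o wide
------------------------------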
• [SLOW TEST:22.483 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":114,"skipped":1769,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:40:55.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:40:59.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5901" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1808,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:40:59.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-d974f262-d0f9-4074-b7f1-5f54d4abef77 STEP: Creating a pod to test consume secrets Mar 21 21:40:59.974: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-35d6301b-94d7-4cd1-9c10-46427388d7fa" in namespace "projected-5046" to be "success or failure" Mar 21 21:40:59.978: INFO: Pod "pod-projected-secrets-35d6301b-94d7-4cd1-9c10-46427388d7fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.670834ms Mar 21 21:41:01.982: INFO: Pod "pod-projected-secrets-35d6301b-94d7-4cd1-9c10-46427388d7fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007742884s Mar 21 21:41:03.986: INFO: Pod "pod-projected-secrets-35d6301b-94d7-4cd1-9c10-46427388d7fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011886767s STEP: Saw pod success Mar 21 21:41:03.986: INFO: Pod "pod-projected-secrets-35d6301b-94d7-4cd1-9c10-46427388d7fa" satisfied condition "success or failure" Mar 21 21:41:03.990: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-35d6301b-94d7-4cd1-9c10-46427388d7fa container projected-secret-volume-test: STEP: delete the pod Mar 21 21:41:04.021: INFO: Waiting for pod pod-projected-secrets-35d6301b-94d7-4cd1-9c10-46427388d7fa to disappear Mar 21 21:41:04.032: INFO: Pod pod-projected-secrets-35d6301b-94d7-4cd1-9c10-46427388d7fa no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:41:04.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5046" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1809,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:41:04.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-7b051839-5830-4f74-a24f-bf173814a979 in namespace container-probe-1234 Mar 21 21:41:08.126: INFO: Started pod test-webserver-7b051839-5830-4f74-a24f-bf173814a979 in namespace container-probe-1234 STEP: checking the pod's current state and verifying that restartCount is present Mar 21 21:41:08.128: INFO: Initial restart count of pod test-webserver-7b051839-5830-4f74-a24f-bf173814a979 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:45:08.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1234" for this suite. 
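The probe test above starts a web server with an HTTP liveness probe on /healthz and then watches for roughly four minutes to confirm restartCount never leaves 0. A minimal sketch of such a pod, assuming an image that answers 200 on /healthz; the pod name and image below are illustrative, not the test's actual manifest:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-ok
spec:
  containers:
  - name: web
    image: k8s.gcr.io/test-webserver      # assumed to serve 200 on /healthz
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 3
EOF

# While the probe keeps succeeding the kubelet never restarts the container:
kubectl get pod liveness-ok -o jsonpath='{.status.containerStatuses[0].restartCount}'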
• [SLOW TEST:244.730 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1835,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:45:08.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 21 21:45:13.433: INFO: Successfully updated pod "annotationupdated1169835-f481-4557-9f13-247294ee1979" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:45:15.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1658" for this suite. 
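The downward API test just logged mounts the pod's own annotations as a file and checks that the kubelet refreshes the file after the annotations change. A minimal sketch of the mechanism, with an invented pod name and annotation key:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

# Change the annotation; the mounted file is rewritten shortly afterwards
kubectl annotate pod annotation-demo build="two" --overwrite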
• [SLOW TEST:6.672 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1860,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:45:15.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1897 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 21 21:45:15.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6514' Mar 21 21:45:15.638: INFO: stderr: "" Mar 21 21:45:15.638: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 21 21:45:20.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6514 -o json' Mar 21 21:45:20.926: INFO: stderr: "" Mar 21 21:45:20.926: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-21T21:45:15Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6514\",\n \"resourceVersion\": \"1653071\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6514/pods/e2e-test-httpd-pod\",\n \"uid\": \"5c69f49b-654e-48a4-a9f4-6085893cd307\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-m9bl2\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n 
\"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-m9bl2\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-m9bl2\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-21T21:45:15Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-21T21:45:18Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-21T21:45:18Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-21T21:45:15Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://c42dde511ee1089540f52b6fd05e9012b9190e6c87222cd6395cdd016936472b\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-21T21:45:18Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.170\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.170\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-21T21:45:15Z\"\n }\n}\n" STEP: replace the image in the pod Mar 21 21:45:20.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6514' Mar 21 21:45:21.270: INFO: stderr: "" Mar 21 21:45:21.271: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1902 Mar 21 21:45:21.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6514' Mar 21 21:45:29.226: INFO: stderr: "" Mar 21 21:45:29.226: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:45:29.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6514" for this suite. 
• [SLOW TEST:13.776 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1893 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":119,"skipped":1883,"failed":0} [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:45:29.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:45:29.303: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:45:33.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8217" for this suite. 
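The websocket test drives the pod "exec" subresource directly over a websocket instead of going through kubectl, but the endpoint is the same one kubectl exec negotiates. A rough illustration (pod name invented; the URL is shown only for the shape of the path and query string):

# kubectl exec speaks to the same subresource:
kubectl exec pod-exec-demo -- echo remote command output

# The API path a raw websocket client would dial looks like:
#   /api/v1/namespaces/default/pods/pod-exec-demo/exec?command=echo&command=hi&stdout=true
# kubectl proxy exposes the API server locally for inspecting the pod object:
kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/default/pods/pod-exec-demo"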
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1883,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:45:33.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 21 21:45:33.634: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2741 /api/v1/namespaces/watch-2741/configmaps/e2e-watch-test-label-changed 5e0712a5-50d3-42a7-9438-61787f3b5ec5 1653153 0 2020-03-21 21:45:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 21 21:45:33.634: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2741 /api/v1/namespaces/watch-2741/configmaps/e2e-watch-test-label-changed 5e0712a5-50d3-42a7-9438-61787f3b5ec5 1653154 0 2020-03-21 21:45:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 21 21:45:33.634: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2741 /api/v1/namespaces/watch-2741/configmaps/e2e-watch-test-label-changed 5e0712a5-50d3-42a7-9438-61787f3b5ec5 1653156 0 2020-03-21 21:45:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 21 21:45:43.672: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2741 /api/v1/namespaces/watch-2741/configmaps/e2e-watch-test-label-changed 5e0712a5-50d3-42a7-9438-61787f3b5ec5 1653207 0 2020-03-21 21:45:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 21 21:45:43.673: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2741 /api/v1/namespaces/watch-2741/configmaps/e2e-watch-test-label-changed 5e0712a5-50d3-42a7-9438-61787f3b5ec5 1653208 0 2020-03-21 21:45:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 21 21:45:43.673: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2741 /api/v1/namespaces/watch-2741/configmaps/e2e-watch-test-label-changed 5e0712a5-50d3-42a7-9438-61787f3b5ec5 1653209 0 2020-03-21 21:45:33 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:45:43.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2741" for this suite. • [SLOW TEST:10.187 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":121,"skipped":1913,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:45:43.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 21:45:43.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6d22305-1327-411b-a49d-0c82cf37c08c" in namespace "downward-api-8974" to be "success or failure" Mar 21 21:45:43.786: INFO: Pod "downwardapi-volume-f6d22305-1327-411b-a49d-0c82cf37c08c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.407833ms Mar 21 21:45:45.791: INFO: Pod "downwardapi-volume-f6d22305-1327-411b-a49d-0c82cf37c08c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021810155s Mar 21 21:45:47.795: INFO: Pod "downwardapi-volume-f6d22305-1327-411b-a49d-0c82cf37c08c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025954246s STEP: Saw pod success Mar 21 21:45:47.795: INFO: Pod "downwardapi-volume-f6d22305-1327-411b-a49d-0c82cf37c08c" satisfied condition "success or failure" Mar 21 21:45:47.798: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f6d22305-1327-411b-a49d-0c82cf37c08c container client-container: STEP: delete the pod Mar 21 21:45:47.847: INFO: Waiting for pod downwardapi-volume-f6d22305-1327-411b-a49d-0c82cf37c08c to disappear Mar 21 21:45:47.867: INFO: Pod downwardapi-volume-f6d22305-1327-411b-a49d-0c82cf37c08c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:45:47.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8974" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1922,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:45:47.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:45:51.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5523" for this suite. 
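The kubelet test above runs a one-shot busybox command in a pod and asserts that its stdout is captured as the container log. A minimal reproduction, with an invented pod name:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "echo scheduling is fun"]
EOF

# The command's stdout is served back by the kubelet:
kubectl logs busybox-logs-demo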
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1933,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:45:51.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 21:45:52.066: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39843eb9-e40b-4441-85f7-3324de7aeb54" in namespace "projected-3143" to be "success or failure" Mar 21 21:45:52.085: INFO: Pod "downwardapi-volume-39843eb9-e40b-4441-85f7-3324de7aeb54": Phase="Pending", Reason="", readiness=false. Elapsed: 18.785655ms Mar 21 21:45:54.089: INFO: Pod "downwardapi-volume-39843eb9-e40b-4441-85f7-3324de7aeb54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022461617s Mar 21 21:45:56.093: INFO: Pod "downwardapi-volume-39843eb9-e40b-4441-85f7-3324de7aeb54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026679374s STEP: Saw pod success Mar 21 21:45:56.093: INFO: Pod "downwardapi-volume-39843eb9-e40b-4441-85f7-3324de7aeb54" satisfied condition "success or failure" Mar 21 21:45:56.096: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-39843eb9-e40b-4441-85f7-3324de7aeb54 container client-container: STEP: delete the pod Mar 21 21:45:56.126: INFO: Waiting for pod downwardapi-volume-39843eb9-e40b-4441-85f7-3324de7aeb54 to disappear Mar 21 21:45:56.138: INFO: Pod downwardapi-volume-39843eb9-e40b-4441-85f7-3324de7aeb54 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:45:56.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3143" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1964,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:45:56.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 21 21:45:56.229: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 21:45:56.241: INFO: Waiting for terminating namespaces to be deleted... Mar 21 21:45:56.244: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 21 21:45:56.248: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 21 21:45:56.248: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 21:45:56.248: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 21 21:45:56.248: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 21:45:56.248: INFO: busybox-scheduling-86bf0a7b-3939-4292-8fee-a1700097330f from kubelet-test-5523 started at 2020-03-21 21:45:47 +0000 UTC (1 container statuses recorded) Mar 21 21:45:56.248: INFO: Container busybox-scheduling-86bf0a7b-3939-4292-8fee-a1700097330f ready: true, restart count 0 Mar 21 21:45:56.248: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 21 21:45:56.254: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 21 21:45:56.254: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 21:45:56.254: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 21 21:45:56.254: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 21:45:56.254: INFO: pod-exec-websocket-05372fac-23ad-4aba-9563-bd86c95307cc from pods-8217 started at 2020-03-21 21:45:29 +0000 UTC (1 container statuses recorded) Mar 21 21:45:56.254: INFO: Container main ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-1b181ddd-4976-4114-b2f5-ef5efd2c1c43 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-1b181ddd-4976-4114-b2f5-ef5efd2c1c43 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-1b181ddd-4976-4114-b2f5-ef5efd2c1c43 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:51:04.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9531" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.326 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":125,"skipped":1970,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:51:04.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:51:08.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2897" for this suite. 
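The hostAliases test that just passed checks that entries from pod.spec.hostAliases are written into the container's /etc/hosts by the kubelet. A minimal sketch (names and addresses invented):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/hosts"]
EOF

# The log should contain a kubelet-managed block with: 127.0.0.1  foo.local  bar.local
kubectl logs hostaliases-demo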
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":1976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:51:08.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-bfedd57c-ab25-4f4d-9db0-072dd8aafc9a STEP: Creating a pod to test consume configMaps Mar 21 21:51:08.674: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-39458391-b9bc-4bc6-8ed6-9085775de8ae" in namespace "projected-7841" to be "success or failure" Mar 21 21:51:08.677: INFO: Pod "pod-projected-configmaps-39458391-b9bc-4bc6-8ed6-9085775de8ae": Phase="Pending", Reason="", readiness=false. Elapsed: 3.338365ms Mar 21 21:51:10.723: INFO: Pod "pod-projected-configmaps-39458391-b9bc-4bc6-8ed6-9085775de8ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049097663s Mar 21 21:51:12.726: INFO: Pod "pod-projected-configmaps-39458391-b9bc-4bc6-8ed6-9085775de8ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052368523s STEP: Saw pod success Mar 21 21:51:12.726: INFO: Pod "pod-projected-configmaps-39458391-b9bc-4bc6-8ed6-9085775de8ae" satisfied condition "success or failure" Mar 21 21:51:12.728: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-39458391-b9bc-4bc6-8ed6-9085775de8ae container projected-configmap-volume-test: STEP: delete the pod Mar 21 21:51:12.774: INFO: Waiting for pod pod-projected-configmaps-39458391-b9bc-4bc6-8ed6-9085775de8ae to disappear Mar 21 21:51:12.791: INFO: Pod pod-projected-configmaps-39458391-b9bc-4bc6-8ed6-9085775de8ae no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:51:12.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7841" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2040,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:51:12.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 21 21:51:12.890: INFO: Waiting up to 5m0s for pod "pod-da50111b-8367-42eb-97e6-7e81046071c5" in namespace "emptydir-7232" to be "success or failure" Mar 21 21:51:12.905: INFO: Pod "pod-da50111b-8367-42eb-97e6-7e81046071c5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.099495ms Mar 21 21:51:14.920: INFO: Pod "pod-da50111b-8367-42eb-97e6-7e81046071c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030186801s Mar 21 21:51:16.932: INFO: Pod "pod-da50111b-8367-42eb-97e6-7e81046071c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042207583s STEP: Saw pod success Mar 21 21:51:16.932: INFO: Pod "pod-da50111b-8367-42eb-97e6-7e81046071c5" satisfied condition "success or failure" Mar 21 21:51:16.935: INFO: Trying to get logs from node jerma-worker2 pod pod-da50111b-8367-42eb-97e6-7e81046071c5 container test-container: STEP: delete the pod Mar 21 21:51:16.952: INFO: Waiting for pod pod-da50111b-8367-42eb-97e6-7e81046071c5 to disappear Mar 21 21:51:16.970: INFO: Pod pod-da50111b-8367-42eb-97e6-7e81046071c5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:51:16.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7232" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2061,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:51:16.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Mar 21 21:51:17.565: INFO: created pod pod-service-account-defaultsa Mar 21 21:51:17.565: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 21 21:51:17.568: INFO: created pod pod-service-account-mountsa Mar 21 21:51:17.568: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 21 21:51:17.596: INFO: created pod pod-service-account-nomountsa Mar 21 21:51:17.596: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 21 21:51:17.600: INFO: created pod pod-service-account-defaultsa-mountspec Mar 21 21:51:17.600: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 21 21:51:17.604: INFO: created pod pod-service-account-mountsa-mountspec Mar 21 21:51:17.604: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 21 21:51:17.631: INFO: created pod pod-service-account-nomountsa-mountspec Mar 21 21:51:17.631: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 21 21:51:17.672: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 21 21:51:17.672: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 21 21:51:17.729: INFO: created pod pod-service-account-mountsa-nomountspec Mar 21 21:51:17.729: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 21 21:51:17.784: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 21 21:51:17.784: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:51:17.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-411" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":129,"skipped":2073,"failed":0} S ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:51:17.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:51:18.000: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 21 21:51:19.298: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:51:19.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-256" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":130,"skipped":2074,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:51:19.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:51:34.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-810" for this suite. • [SLOW TEST:14.817 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":131,"skipped":2083,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:51:34.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4388 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 21 21:51:34.786: INFO: Found 0 stateful pods, waiting for 3 Mar 21 21:51:44.790: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 21:51:44.790: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 21:51:44.790: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Mar 21 21:51:54.789: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 21:51:54.789: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 21:51:54.789: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 21 
21:51:54.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4388 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 21 21:51:57.541: INFO: stderr: "I0321 21:51:57.427240 1903 log.go:172] (0xc0000f6fd0) (0xc0006dfea0) Create stream\nI0321 21:51:57.427285 1903 log.go:172] (0xc0000f6fd0) (0xc0006dfea0) Stream added, broadcasting: 1\nI0321 21:51:57.430334 1903 log.go:172] (0xc0000f6fd0) Reply frame received for 1\nI0321 21:51:57.430377 1903 log.go:172] (0xc0000f6fd0) (0xc0006dff40) Create stream\nI0321 21:51:57.430390 1903 log.go:172] (0xc0000f6fd0) (0xc0006dff40) Stream added, broadcasting: 3\nI0321 21:51:57.431524 1903 log.go:172] (0xc0000f6fd0) Reply frame received for 3\nI0321 21:51:57.431572 1903 log.go:172] (0xc0000f6fd0) (0xc000644640) Create stream\nI0321 21:51:57.431584 1903 log.go:172] (0xc0000f6fd0) (0xc000644640) Stream added, broadcasting: 5\nI0321 21:51:57.432465 1903 log.go:172] (0xc0000f6fd0) Reply frame received for 5\nI0321 21:51:57.500965 1903 log.go:172] (0xc0000f6fd0) Data frame received for 5\nI0321 21:51:57.500997 1903 log.go:172] (0xc000644640) (5) Data frame handling\nI0321 21:51:57.501018 1903 log.go:172] (0xc000644640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0321 21:51:57.532142 1903 log.go:172] (0xc0000f6fd0) Data frame received for 3\nI0321 21:51:57.532187 1903 log.go:172] (0xc0006dff40) (3) Data frame handling\nI0321 21:51:57.532220 1903 log.go:172] (0xc0006dff40) (3) Data frame sent\nI0321 21:51:57.532494 1903 log.go:172] (0xc0000f6fd0) Data frame received for 5\nI0321 21:51:57.532538 1903 log.go:172] (0xc0000f6fd0) Data frame received for 3\nI0321 21:51:57.532594 1903 log.go:172] (0xc0006dff40) (3) Data frame handling\nI0321 21:51:57.532629 1903 log.go:172] (0xc000644640) (5) Data frame handling\nI0321 21:51:57.534990 1903 log.go:172] (0xc0000f6fd0) Data frame received for 1\nI0321 21:51:57.535027 1903 log.go:172] (0xc0006dfea0) (1) Data frame handling\nI0321 21:51:57.535048 1903 log.go:172] (0xc0006dfea0) (1) Data frame sent\nI0321 21:51:57.535070 1903 log.go:172] (0xc0000f6fd0) (0xc0006dfea0) Stream removed, broadcasting: 1\nI0321 21:51:57.535111 1903 log.go:172] (0xc0000f6fd0) Go away received\nI0321 21:51:57.535576 1903 log.go:172] (0xc0000f6fd0) (0xc0006dfea0) Stream removed, broadcasting: 1\nI0321 21:51:57.535602 1903 log.go:172] (0xc0000f6fd0) (0xc0006dff40) Stream removed, broadcasting: 3\nI0321 21:51:57.535614 1903 log.go:172] (0xc0000f6fd0) (0xc000644640) Stream removed, broadcasting: 5\n" Mar 21 21:51:57.541: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 21 21:51:57.541: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 21 21:52:07.633: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 21 21:52:17.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4388 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 21 21:52:17.910: INFO: stderr: "I0321 21:52:17.823863 1936 log.go:172] (0xc0006eab00) (0xc0007ac140) Create stream\nI0321 21:52:17.823923 1936 log.go:172] (0xc0006eab00) (0xc0007ac140) Stream added, broadcasting: 1\nI0321 
21:52:17.826414 1936 log.go:172] (0xc0006eab00) Reply frame received for 1\nI0321 21:52:17.826472 1936 log.go:172] (0xc0006eab00) (0xc0003c9900) Create stream\nI0321 21:52:17.826483 1936 log.go:172] (0xc0006eab00) (0xc0003c9900) Stream added, broadcasting: 3\nI0321 21:52:17.827468 1936 log.go:172] (0xc0006eab00) Reply frame received for 3\nI0321 21:52:17.827541 1936 log.go:172] (0xc0006eab00) (0xc0006bc000) Create stream\nI0321 21:52:17.827562 1936 log.go:172] (0xc0006eab00) (0xc0006bc000) Stream added, broadcasting: 5\nI0321 21:52:17.828433 1936 log.go:172] (0xc0006eab00) Reply frame received for 5\nI0321 21:52:17.902855 1936 log.go:172] (0xc0006eab00) Data frame received for 5\nI0321 21:52:17.902889 1936 log.go:172] (0xc0006bc000) (5) Data frame handling\nI0321 21:52:17.902898 1936 log.go:172] (0xc0006bc000) (5) Data frame sent\nI0321 21:52:17.902906 1936 log.go:172] (0xc0006eab00) Data frame received for 5\nI0321 21:52:17.902911 1936 log.go:172] (0xc0006bc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0321 21:52:17.902921 1936 log.go:172] (0xc0006eab00) Data frame received for 3\nI0321 21:52:17.902994 1936 log.go:172] (0xc0003c9900) (3) Data frame handling\nI0321 21:52:17.903030 1936 log.go:172] (0xc0003c9900) (3) Data frame sent\nI0321 21:52:17.903479 1936 log.go:172] (0xc0006eab00) Data frame received for 3\nI0321 21:52:17.903527 1936 log.go:172] (0xc0003c9900) (3) Data frame handling\nI0321 21:52:17.904987 1936 log.go:172] (0xc0006eab00) Data frame received for 1\nI0321 21:52:17.905004 1936 log.go:172] (0xc0007ac140) (1) Data frame handling\nI0321 21:52:17.905013 1936 log.go:172] (0xc0007ac140) (1) Data frame sent\nI0321 21:52:17.905029 1936 log.go:172] (0xc0006eab00) (0xc0007ac140) Stream removed, broadcasting: 1\nI0321 21:52:17.905043 1936 log.go:172] (0xc0006eab00) Go away received\nI0321 21:52:17.905743 1936 log.go:172] (0xc0006eab00) (0xc0007ac140) Stream removed, broadcasting: 1\nI0321 21:52:17.905773 1936 log.go:172] (0xc0006eab00) (0xc0003c9900) Stream removed, broadcasting: 3\nI0321 21:52:17.905792 1936 log.go:172] (0xc0006eab00) (0xc0006bc000) Stream removed, broadcasting: 5\n" Mar 21 21:52:17.910: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 21 21:52:17.910: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 21 21:52:27.955: INFO: Waiting for StatefulSet statefulset-4388/ss2 to complete update Mar 21 21:52:27.955: INFO: Waiting for Pod statefulset-4388/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 21 21:52:27.955: INFO: Waiting for Pod statefulset-4388/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 21 21:52:27.955: INFO: Waiting for Pod statefulset-4388/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 21 21:52:37.963: INFO: Waiting for StatefulSet statefulset-4388/ss2 to complete update Mar 21 21:52:37.963: INFO: Waiting for Pod statefulset-4388/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 21 21:52:47.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4388 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 21 21:52:48.230: INFO: stderr: "I0321 21:52:48.115300 1957 log.go:172] (0xc00092c6e0) (0xc000a26000) Create stream\nI0321 21:52:48.115361 1957 log.go:172] (0xc00092c6e0) 
(0xc000a26000) Stream added, broadcasting: 1\nI0321 21:52:48.118018 1957 log.go:172] (0xc00092c6e0) Reply frame received for 1\nI0321 21:52:48.118075 1957 log.go:172] (0xc00092c6e0) (0xc0006b9a40) Create stream\nI0321 21:52:48.118090 1957 log.go:172] (0xc00092c6e0) (0xc0006b9a40) Stream added, broadcasting: 3\nI0321 21:52:48.119225 1957 log.go:172] (0xc00092c6e0) Reply frame received for 3\nI0321 21:52:48.119284 1957 log.go:172] (0xc00092c6e0) (0xc000a260a0) Create stream\nI0321 21:52:48.119304 1957 log.go:172] (0xc00092c6e0) (0xc000a260a0) Stream added, broadcasting: 5\nI0321 21:52:48.120579 1957 log.go:172] (0xc00092c6e0) Reply frame received for 5\nI0321 21:52:48.197782 1957 log.go:172] (0xc00092c6e0) Data frame received for 5\nI0321 21:52:48.197814 1957 log.go:172] (0xc000a260a0) (5) Data frame handling\nI0321 21:52:48.197884 1957 log.go:172] (0xc000a260a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0321 21:52:48.224262 1957 log.go:172] (0xc00092c6e0) Data frame received for 3\nI0321 21:52:48.224426 1957 log.go:172] (0xc0006b9a40) (3) Data frame handling\nI0321 21:52:48.224446 1957 log.go:172] (0xc0006b9a40) (3) Data frame sent\nI0321 21:52:48.224466 1957 log.go:172] (0xc00092c6e0) Data frame received for 5\nI0321 21:52:48.224472 1957 log.go:172] (0xc000a260a0) (5) Data frame handling\nI0321 21:52:48.224744 1957 log.go:172] (0xc00092c6e0) Data frame received for 3\nI0321 21:52:48.224765 1957 log.go:172] (0xc0006b9a40) (3) Data frame handling\nI0321 21:52:48.227145 1957 log.go:172] (0xc00092c6e0) Data frame received for 1\nI0321 21:52:48.227162 1957 log.go:172] (0xc000a26000) (1) Data frame handling\nI0321 21:52:48.227178 1957 log.go:172] (0xc000a26000) (1) Data frame sent\nI0321 21:52:48.227304 1957 log.go:172] (0xc00092c6e0) (0xc000a26000) Stream removed, broadcasting: 1\nI0321 21:52:48.227366 1957 log.go:172] (0xc00092c6e0) Go away received\nI0321 21:52:48.227557 1957 log.go:172] (0xc00092c6e0) (0xc000a26000) Stream removed, broadcasting: 1\nI0321 21:52:48.227568 1957 log.go:172] (0xc00092c6e0) (0xc0006b9a40) Stream removed, broadcasting: 3\nI0321 21:52:48.227573 1957 log.go:172] (0xc00092c6e0) (0xc000a260a0) Stream removed, broadcasting: 5\n" Mar 21 21:52:48.230: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 21 21:52:48.230: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 21 21:52:58.262: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 21 21:53:08.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4388 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 21 21:53:08.507: INFO: stderr: "I0321 21:53:08.429719 1979 log.go:172] (0xc0000f4370) (0xc000604140) Create stream\nI0321 21:53:08.429781 1979 log.go:172] (0xc0000f4370) (0xc000604140) Stream added, broadcasting: 1\nI0321 21:53:08.431714 1979 log.go:172] (0xc0000f4370) Reply frame received for 1\nI0321 21:53:08.431756 1979 log.go:172] (0xc0000f4370) (0xc0006041e0) Create stream\nI0321 21:53:08.431770 1979 log.go:172] (0xc0000f4370) (0xc0006041e0) Stream added, broadcasting: 3\nI0321 21:53:08.432751 1979 log.go:172] (0xc0000f4370) Reply frame received for 3\nI0321 21:53:08.432793 1979 log.go:172] (0xc0000f4370) (0xc000604320) Create stream\nI0321 21:53:08.432811 1979 log.go:172] (0xc0000f4370) (0xc000604320) Stream added, broadcasting: 5\nI0321 
21:53:08.434090 1979 log.go:172] (0xc0000f4370) Reply frame received for 5\nI0321 21:53:08.501037 1979 log.go:172] (0xc0000f4370) Data frame received for 5\nI0321 21:53:08.501260 1979 log.go:172] (0xc000604320) (5) Data frame handling\nI0321 21:53:08.501300 1979 log.go:172] (0xc000604320) (5) Data frame sent\nI0321 21:53:08.501328 1979 log.go:172] (0xc0000f4370) Data frame received for 5\nI0321 21:53:08.501348 1979 log.go:172] (0xc000604320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0321 21:53:08.501409 1979 log.go:172] (0xc0000f4370) Data frame received for 3\nI0321 21:53:08.501440 1979 log.go:172] (0xc0006041e0) (3) Data frame handling\nI0321 21:53:08.501460 1979 log.go:172] (0xc0006041e0) (3) Data frame sent\nI0321 21:53:08.501474 1979 log.go:172] (0xc0000f4370) Data frame received for 3\nI0321 21:53:08.501484 1979 log.go:172] (0xc0006041e0) (3) Data frame handling\nI0321 21:53:08.502573 1979 log.go:172] (0xc0000f4370) Data frame received for 1\nI0321 21:53:08.502586 1979 log.go:172] (0xc000604140) (1) Data frame handling\nI0321 21:53:08.502599 1979 log.go:172] (0xc000604140) (1) Data frame sent\nI0321 21:53:08.502691 1979 log.go:172] (0xc0000f4370) (0xc000604140) Stream removed, broadcasting: 1\nI0321 21:53:08.502753 1979 log.go:172] (0xc0000f4370) Go away received\nI0321 21:53:08.503125 1979 log.go:172] (0xc0000f4370) (0xc000604140) Stream removed, broadcasting: 1\nI0321 21:53:08.503144 1979 log.go:172] (0xc0000f4370) (0xc0006041e0) Stream removed, broadcasting: 3\nI0321 21:53:08.503155 1979 log.go:172] (0xc0000f4370) (0xc000604320) Stream removed, broadcasting: 5\n" Mar 21 21:53:08.507: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 21 21:53:08.507: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 21 21:53:38.527: INFO: Deleting all statefulset in ns statefulset-4388 Mar 21 21:53:38.563: INFO: Scaling statefulset ss2 to 0 Mar 21 21:54:08.576: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 21:54:08.579: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:54:08.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4388" for this suite. 
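Note: the rolling update and rollback exercised above can be reproduced outside the e2e harness with plain kubectl. A minimal sketch, assuming the StatefulSet ss2 in namespace statefulset-4388 from this run; the container name and image tag below are assumptions for illustration:
    # Trigger a rolling update by editing the pod template (container name and tag are assumptions)
    kubectl -n statefulset-4388 set image statefulset/ss2 webserver=httpd:2.4.39-alpine
    # The controller replaces pods in reverse ordinal order; watch it converge
    kubectl -n statefulset-4388 rollout status statefulset/ss2
    # Roll back to the previous template revision, mirroring the rollback step above
    kubectl -n statefulset-4388 rollout undo statefulset/ss2
    # Template revisions are tracked as ControllerRevision objects
    kubectl -n statefulset-4388 get controllerrevisions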
• [SLOW TEST:153.915 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":132,"skipped":2087,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:54:08.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 21:54:09.059: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 21:54:11.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424449, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424449, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424449, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424449, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 21:54:14.101: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:54:14.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: 
Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:54:14.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8697" for this suite. STEP: Destroying namespace "webhook-8697-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.339 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":133,"skipped":2096,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:54:14.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Mar 21 21:54:19.078: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3468 PodName:pod-sharedvolume-c078f621-0692-4e0d-9824-f39eccacf519 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 21:54:19.078: INFO: >>> kubeConfig: /root/.kube/config I0321 21:54:19.111264 6 log.go:172] (0xc000db4dc0) (0xc001ac8000) Create stream I0321 21:54:19.111295 6 log.go:172] (0xc000db4dc0) (0xc001ac8000) Stream added, broadcasting: 1 I0321 21:54:19.113442 6 log.go:172] (0xc000db4dc0) Reply frame received for 1 I0321 21:54:19.113492 6 log.go:172] (0xc000db4dc0) (0xc0015f2f00) Create stream I0321 21:54:19.113509 6 log.go:172] (0xc000db4dc0) (0xc0015f2f00) Stream added, broadcasting: 3 I0321 21:54:19.114540 6 log.go:172] (0xc000db4dc0) Reply frame received for 3 I0321 21:54:19.114584 6 log.go:172] (0xc000db4dc0) (0xc001f72780) Create stream I0321 21:54:19.114602 6 log.go:172] (0xc000db4dc0) (0xc001f72780) Stream added, broadcasting: 5 I0321 21:54:19.115433 6 log.go:172] (0xc000db4dc0) Reply frame received for 5 I0321 21:54:19.190463 6 log.go:172] (0xc000db4dc0) Data frame received for 3 I0321 21:54:19.190508 6 
log.go:172] (0xc0015f2f00) (3) Data frame handling I0321 21:54:19.190551 6 log.go:172] (0xc0015f2f00) (3) Data frame sent I0321 21:54:19.190570 6 log.go:172] (0xc000db4dc0) Data frame received for 3 I0321 21:54:19.190645 6 log.go:172] (0xc0015f2f00) (3) Data frame handling I0321 21:54:19.190670 6 log.go:172] (0xc000db4dc0) Data frame received for 5 I0321 21:54:19.190700 6 log.go:172] (0xc001f72780) (5) Data frame handling I0321 21:54:19.192413 6 log.go:172] (0xc000db4dc0) Data frame received for 1 I0321 21:54:19.192441 6 log.go:172] (0xc001ac8000) (1) Data frame handling I0321 21:54:19.192474 6 log.go:172] (0xc001ac8000) (1) Data frame sent I0321 21:54:19.192502 6 log.go:172] (0xc000db4dc0) (0xc001ac8000) Stream removed, broadcasting: 1 I0321 21:54:19.192522 6 log.go:172] (0xc000db4dc0) Go away received I0321 21:54:19.192692 6 log.go:172] (0xc000db4dc0) (0xc001ac8000) Stream removed, broadcasting: 1 I0321 21:54:19.192722 6 log.go:172] (0xc000db4dc0) (0xc0015f2f00) Stream removed, broadcasting: 3 I0321 21:54:19.192747 6 log.go:172] (0xc000db4dc0) (0xc001f72780) Stream removed, broadcasting: 5 Mar 21 21:54:19.192: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:54:19.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3468" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":134,"skipped":2099,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:54:19.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1733 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 21 21:54:19.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-8521' Mar 21 21:54:19.351: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 21 21:54:19.351: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1738 Mar 21 21:54:23.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8521' Mar 21 21:54:23.505: INFO: stderr: "" Mar 21 21:54:23.505: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:54:23.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8521" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":135,"skipped":2099,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:54:23.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 21:54:23.640: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e82889b-f6fe-457f-bcac-e6a448760e12" in namespace "downward-api-3105" to be "success or failure" Mar 21 21:54:23.683: INFO: Pod "downwardapi-volume-8e82889b-f6fe-457f-bcac-e6a448760e12": Phase="Pending", Reason="", readiness=false. Elapsed: 42.517502ms Mar 21 21:54:25.687: INFO: Pod "downwardapi-volume-8e82889b-f6fe-457f-bcac-e6a448760e12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046441576s Mar 21 21:54:27.692: INFO: Pod "downwardapi-volume-8e82889b-f6fe-457f-bcac-e6a448760e12": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051346214s STEP: Saw pod success Mar 21 21:54:27.692: INFO: Pod "downwardapi-volume-8e82889b-f6fe-457f-bcac-e6a448760e12" satisfied condition "success or failure" Mar 21 21:54:27.695: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8e82889b-f6fe-457f-bcac-e6a448760e12 container client-container: STEP: delete the pod Mar 21 21:54:27.742: INFO: Waiting for pod downwardapi-volume-8e82889b-f6fe-457f-bcac-e6a448760e12 to disappear Mar 21 21:54:27.755: INFO: Pod downwardapi-volume-8e82889b-f6fe-457f-bcac-e6a448760e12 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:54:27.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3105" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:54:27.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 21:54:28.305: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 21:54:30.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424468, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424468, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424468, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424468, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 21:54:33.378: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via 
the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: updating (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: updating (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:54:43.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7260" for this suite. STEP: Destroying namespace "webhook-7260-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.823 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":137,"skipped":2139,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:54:43.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:54:54.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6932" for this suite. • [SLOW TEST:11.125 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":138,"skipped":2144,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:54:54.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:54:58.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5349" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:54:58.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-7255b271-0472-4964-9187-72a4dc9db560 STEP: Creating a pod to test consume configMaps Mar 21 21:54:58.892: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-258e031e-40ad-42e0-8087-3152374bfb9b" in namespace "projected-3297" to be "success or failure" Mar 21 21:54:58.910: INFO: Pod "pod-projected-configmaps-258e031e-40ad-42e0-8087-3152374bfb9b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.513481ms Mar 21 21:55:01.031: INFO: Pod "pod-projected-configmaps-258e031e-40ad-42e0-8087-3152374bfb9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139549182s Mar 21 21:55:03.036: INFO: Pod "pod-projected-configmaps-258e031e-40ad-42e0-8087-3152374bfb9b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.144104218s STEP: Saw pod success Mar 21 21:55:03.036: INFO: Pod "pod-projected-configmaps-258e031e-40ad-42e0-8087-3152374bfb9b" satisfied condition "success or failure" Mar 21 21:55:03.039: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-258e031e-40ad-42e0-8087-3152374bfb9b container projected-configmap-volume-test: STEP: delete the pod Mar 21 21:55:03.187: INFO: Waiting for pod pod-projected-configmaps-258e031e-40ad-42e0-8087-3152374bfb9b to disappear Mar 21 21:55:03.190: INFO: Pod pod-projected-configmaps-258e031e-40ad-42e0-8087-3152374bfb9b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:55:03.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3297" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2185,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:55:03.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 21 21:55:03.268: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:55:18.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5862" for this suite. 
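Note: the "published spec" this test checks is the apiserver's aggregated OpenAPI document, which can be inspected directly. A minimal sketch; the grep pattern is a placeholder for whatever definition key the CRD's group/version/kind produces:
    # Fetch the aggregated OpenAPI v2 document the apiserver publishes
    kubectl get --raw /openapi/v2 > openapi.json
    # CRD definitions appear keyed by group/version/kind; after a version rename,
    # the old key disappears and the renamed one is served
    grep -o '"[^"]*crd-publish-openapi[^"]*"' openapi.json | sort -u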
• [SLOW TEST:15.482 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":141,"skipped":2190,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:55:18.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0321 21:55:30.257324 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 21 21:55:30.257: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:55:30.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8145" for this suite. 
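Note: the dual ownership set up above is plain metadata. Each dependent pod carries two ownerReferences, and foreground deletion of one owner leaves the pod alive because the other owner remains. A sketch of how to observe this, with the pod name left as a placeholder:
    # Show both owners of a dependent pod
    kubectl -n gc-8145 get pod <pod-name> \
      -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'
    # Foreground cascading deletion of one owner, via explicit DeleteOptions
    kubectl proxy --port=8001 &
    curl -X DELETE -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://127.0.0.1:8001/api/v1/namespaces/gc-8145/replicationcontrollers/simpletest-rc-to-be-deleted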
• [SLOW TEST:11.573 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":142,"skipped":2192,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:55:30.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 21 21:55:30.308: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 21:55:30.387: INFO: Waiting for terminating namespaces to be deleted... Mar 21 21:55:30.390: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 21 21:55:30.397: INFO: simpletest-rc-to-be-deleted-bzbd6 from gc-8145 started at 2020-03-21 21:55:18 +0000 UTC (1 container status recorded) Mar 21 21:55:30.397: INFO: Container nginx ready: true, restart count 0 Mar 21 21:55:30.397: INFO: busybox-readonly-fseebdc3d7-c964-4db0-9546-12d1cca4bf86 from kubelet-test-5349 started at 2020-03-21 21:54:54 +0000 UTC (1 container status recorded) Mar 21 21:55:30.397: INFO: Container busybox-readonly-fseebdc3d7-c964-4db0-9546-12d1cca4bf86 ready: true, restart count 0 Mar 21 21:55:30.397: INFO: simpletest-rc-to-be-deleted-7tdsh from gc-8145 started at 2020-03-21 21:55:18 +0000 UTC (1 container status recorded) Mar 21 21:55:30.397: INFO: Container nginx ready: true, restart count 0 Mar 21 21:55:30.397: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 21 21:55:30.397: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 21:55:30.397: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 21 21:55:30.397: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 21:55:30.397: INFO: simpletest-rc-to-be-deleted-64vsx from gc-8145 started at 2020-03-21 21:55:19 +0000 UTC (1 container status recorded) Mar 21 21:55:30.397: INFO: Container nginx ready: true, restart count 0 Mar 21 21:55:30.397: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 21 21:55:30.401: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 21 21:55:30.401: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 21:55:30.401: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 21 21:55:30.401: INFO: 
Container kube-proxy ready: true, restart count 0 Mar 21 21:55:30.401: INFO: simpletest-rc-to-be-deleted-hbf9r from gc-8145 started at 2020-03-21 21:55:19 +0000 UTC (1 container status recorded) Mar 21 21:55:30.401: INFO: Container nginx ready: true, restart count 0 Mar 21 21:55:30.401: INFO: simpletest-rc-to-be-deleted-5zjkt from gc-8145 started at 2020-03-21 21:55:19 +0000 UTC (1 container status recorded) Mar 21 21:55:30.401: INFO: Container nginx ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fe706c8dc53efa], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:55:31.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8022" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":143,"skipped":2198,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:55:31.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:55:31.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9844" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":144,"skipped":2231,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:55:31.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 21 21:55:31.638: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Mar 21 21:55:32.079: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 21 21:55:34.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424532, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424532, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424532, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424532, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 21:55:36.933: INFO: Waited 600.048168ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:55:38.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9062" for this suite. 
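Note: the registration performed above surfaces as an APIService object that the aggregator proxies. A sketch of how to confirm it; the wardle group/version follows the upstream sample-apiserver convention and is an assumption here:
    # Built-in and aggregated API groups are all listed as APIService objects
    kubectl get apiservices
    # Check the Available condition of the aggregated API (name is an assumption)
    kubectl get apiservice v1alpha1.wardle.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'
    # Requests to the group are proxied through the aggregator to the sample server
    kubectl get --raw /apis/wardle.k8s.io/v1alpha1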
• [SLOW TEST:6.735 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":145,"skipped":2236,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:55:38.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 21:55:38.974: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 21:55:40.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424538, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424538, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424539, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424538, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 21:55:44.013: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:55:44.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9605" for this suite. STEP: Destroying namespace "webhook-9605-markers" for this suite. 
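Note: the mutation itself is easy to observe from the outside: create a configmap that matches the webhook's rules and read it back; any keys the webhook injects appear in .data. A sketch, assuming the e2e convention of a mutation-start key (the key names are assumptions):
    kubectl -n webhook-9605 create configmap to-be-mutated --from-literal=mutation-start=yes
    # If the mutating webhook intercepted the CREATE, extra data keys show up here
    kubectl -n webhook-9605 get configmap to-be-mutated -o jsonpath='{.data}'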
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.938 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":146,"skipped":2250,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:55:44.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:55:44.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1739" for this suite. 
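Note: what this test exercises is content negotiation: a client may ask the apiserver to render any list as a meta.k8s.io Table, and a backend that cannot supply the required metadata answers 406 Not Acceptable. A sketch of the happy path against a built-in resource:
    kubectl proxy --port=8001 &
    # Request the server-side Table rendering instead of the raw object list
    curl -H 'Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io' \
      http://127.0.0.1:8001/api/v1/namespaces/default/pods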
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":147,"skipped":2259,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:55:44.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 21:55:44.609: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30414482-0fad-43a1-a9a9-7ed05fa696fb" in namespace "projected-9551" to be "success or failure" Mar 21 21:55:44.632: INFO: Pod "downwardapi-volume-30414482-0fad-43a1-a9a9-7ed05fa696fb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.29573ms Mar 21 21:55:46.636: INFO: Pod "downwardapi-volume-30414482-0fad-43a1-a9a9-7ed05fa696fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026344079s Mar 21 21:55:48.640: INFO: Pod "downwardapi-volume-30414482-0fad-43a1-a9a9-7ed05fa696fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030221871s STEP: Saw pod success Mar 21 21:55:48.640: INFO: Pod "downwardapi-volume-30414482-0fad-43a1-a9a9-7ed05fa696fb" satisfied condition "success or failure" Mar 21 21:55:48.643: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-30414482-0fad-43a1-a9a9-7ed05fa696fb container client-container: STEP: delete the pod Mar 21 21:55:48.675: INFO: Waiting for pod downwardapi-volume-30414482-0fad-43a1-a9a9-7ed05fa696fb to disappear Mar 21 21:55:48.686: INFO: Pod downwardapi-volume-30414482-0fad-43a1-a9a9-7ed05fa696fb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:55:48.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9551" for this suite. 
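Note: the behavior verified above, that limits.memory falls back to node allocatable when the container sets no memory limit, can be reproduced with a small pod. A minimal sketch; the pod name, container name, and paths are placeholders:
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-mem-demo
    spec:
      containers:
      - name: client-container            # deliberately no memory limit
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit && sleep 3600"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: mem_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.memory
    EOF
    # With no limit set, the file reports node allocatable memory instead
    kubectl logs downward-mem-demo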
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:55:48.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 21 21:55:48.753: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 21:55:48.771: INFO: Waiting for terminating namespaces to be deleted... Mar 21 21:55:48.774: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 21 21:55:48.780: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 21 21:55:48.780: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 21:55:48.780: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 21 21:55:48.780: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 21:55:48.780: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 21 21:55:48.785: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 21 21:55:48.785: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 21:55:48.785: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 21 21:55:48.785: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Mar 21 21:55:48.862: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Mar 21 21:55:48.862: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Mar 21 21:55:48.862: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Mar 21 21:55:48.862: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 21 21:55:48.862: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Mar 21 21:55:48.868: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-34bb08af-ed3a-41e3-810b-7082136adf4f.15fe7070dae64b18], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1287/filler-pod-34bb08af-ed3a-41e3-810b-7082136adf4f to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-34bb08af-ed3a-41e3-810b-7082136adf4f.15fe70715f1d06ff], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-34bb08af-ed3a-41e3-810b-7082136adf4f.15fe70718a215cb7], Reason = [Created], Message = [Created container filler-pod-34bb08af-ed3a-41e3-810b-7082136adf4f] STEP: Considering event: Type = [Normal], Name = [filler-pod-34bb08af-ed3a-41e3-810b-7082136adf4f.15fe707197f7d0da], Reason = [Started], Message = [Started container filler-pod-34bb08af-ed3a-41e3-810b-7082136adf4f] STEP: Considering event: Type = [Normal], Name = [filler-pod-bf23f2e8-3c10-4b18-9df8-c168fb6831ee.15fe7070dbdb70b1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1287/filler-pod-bf23f2e8-3c10-4b18-9df8-c168fb6831ee to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-bf23f2e8-3c10-4b18-9df8-c168fb6831ee.15fe7071250ecbd6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-bf23f2e8-3c10-4b18-9df8-c168fb6831ee.15fe70716ad6d728], Reason = [Created], Message = [Created container filler-pod-bf23f2e8-3c10-4b18-9df8-c168fb6831ee] STEP: Considering event: Type = [Normal], Name = [filler-pod-bf23f2e8-3c10-4b18-9df8-c168fb6831ee.15fe707182ecfa4b], Reason = [Started], Message = [Started container filler-pod-bf23f2e8-3c10-4b18-9df8-c168fb6831ee] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fe7071cb581d94], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:55:54.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1287" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.346 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":149,"skipped":2290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:55:54.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-dc6998cb-3518-4e26-8816-ce926b87c8b7 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-dc6998cb-3518-4e26-8816-ce926b87c8b7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:56:02.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8725" for this suite. 
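Note: the wait at the end of that test covers the kubelet's periodic volume sync: an edit to a configmap reaches an already-mounted projected volume after at most one sync period plus the kubelet's cache TTL. A sketch; the pod name and mount path are placeholders:
    kubectl create configmap demo-cm --from-literal=data-1=value-1
    # ... run a pod that mounts demo-cm through a projected volume ...
    kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'
    # Within roughly a minute the file inside the running pod catches up
    kubectl exec demo-pod -- cat /etc/projected/data-1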
• [SLOW TEST:8.126 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2344,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:56:02.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 21:56:02.247: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b39105f-036b-49d7-9fa2-1013bae1e997" in namespace "projected-9826" to be "success or failure" Mar 21 21:56:02.265: INFO: Pod "downwardapi-volume-8b39105f-036b-49d7-9fa2-1013bae1e997": Phase="Pending", Reason="", readiness=false. Elapsed: 18.139725ms Mar 21 21:56:04.278: INFO: Pod "downwardapi-volume-8b39105f-036b-49d7-9fa2-1013bae1e997": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030698605s Mar 21 21:56:06.282: INFO: Pod "downwardapi-volume-8b39105f-036b-49d7-9fa2-1013bae1e997": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035454679s STEP: Saw pod success Mar 21 21:56:06.282: INFO: Pod "downwardapi-volume-8b39105f-036b-49d7-9fa2-1013bae1e997" satisfied condition "success or failure" Mar 21 21:56:06.285: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8b39105f-036b-49d7-9fa2-1013bae1e997 container client-container: STEP: delete the pod Mar 21 21:56:06.304: INFO: Waiting for pod downwardapi-volume-8b39105f-036b-49d7-9fa2-1013bae1e997 to disappear Mar 21 21:56:06.309: INFO: Pod downwardapi-volume-8b39105f-036b-49d7-9fa2-1013bae1e997 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:56:06.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9826" for this suite. 
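defaultMode on a projected volume controls the permission bits of the files the kubelet generates, which is what the downward API test above verifies. A sketch, assuming an illustrative 0400 mode and file path:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/pause:3.1
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400              # applied to every projected file unless an item overrides it
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name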
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:56:06.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 21 21:56:06.365: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:56:21.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3359" for this suite. • [SLOW TEST:15.111 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":152,"skipped":2384,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:56:21.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4965 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4965 
STEP: creating replication controller externalsvc in namespace services-4965 I0321 21:56:21.589792 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4965, replica count: 2 I0321 21:56:24.640334 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 21:56:27.640583 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 21 21:56:27.713: INFO: Creating new exec pod Mar 21 21:56:31.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4965 execpodhvrwb -- /bin/sh -x -c nslookup nodeport-service' Mar 21 21:56:32.005: INFO: stderr: "I0321 21:56:31.881071 2042 log.go:172] (0xc000ab2000) (0xc00054c000) Create stream\nI0321 21:56:31.881223 2042 log.go:172] (0xc000ab2000) (0xc00054c000) Stream added, broadcasting: 1\nI0321 21:56:31.883224 2042 log.go:172] (0xc000ab2000) Reply frame received for 1\nI0321 21:56:31.883290 2042 log.go:172] (0xc000ab2000) (0xc00083fae0) Create stream\nI0321 21:56:31.883309 2042 log.go:172] (0xc000ab2000) (0xc00083fae0) Stream added, broadcasting: 3\nI0321 21:56:31.884402 2042 log.go:172] (0xc000ab2000) Reply frame received for 3\nI0321 21:56:31.884439 2042 log.go:172] (0xc000ab2000) (0xc00083fcc0) Create stream\nI0321 21:56:31.884453 2042 log.go:172] (0xc000ab2000) (0xc00083fcc0) Stream added, broadcasting: 5\nI0321 21:56:31.885880 2042 log.go:172] (0xc000ab2000) Reply frame received for 5\nI0321 21:56:31.985624 2042 log.go:172] (0xc000ab2000) Data frame received for 5\nI0321 21:56:31.985655 2042 log.go:172] (0xc00083fcc0) (5) Data frame handling\nI0321 21:56:31.985675 2042 log.go:172] (0xc00083fcc0) (5) Data frame sent\n+ nslookup nodeport-service\nI0321 21:56:31.995282 2042 log.go:172] (0xc000ab2000) Data frame received for 3\nI0321 21:56:31.995310 2042 log.go:172] (0xc00083fae0) (3) Data frame handling\nI0321 21:56:31.995331 2042 log.go:172] (0xc00083fae0) (3) Data frame sent\nI0321 21:56:31.996310 2042 log.go:172] (0xc000ab2000) Data frame received for 3\nI0321 21:56:31.996354 2042 log.go:172] (0xc00083fae0) (3) Data frame handling\nI0321 21:56:31.996387 2042 log.go:172] (0xc00083fae0) (3) Data frame sent\nI0321 21:56:31.996995 2042 log.go:172] (0xc000ab2000) Data frame received for 5\nI0321 21:56:31.997021 2042 log.go:172] (0xc00083fcc0) (5) Data frame handling\nI0321 21:56:31.997282 2042 log.go:172] (0xc000ab2000) Data frame received for 3\nI0321 21:56:31.997310 2042 log.go:172] (0xc00083fae0) (3) Data frame handling\nI0321 21:56:31.999463 2042 log.go:172] (0xc000ab2000) Data frame received for 1\nI0321 21:56:31.999502 2042 log.go:172] (0xc00054c000) (1) Data frame handling\nI0321 21:56:31.999534 2042 log.go:172] (0xc00054c000) (1) Data frame sent\nI0321 21:56:31.999576 2042 log.go:172] (0xc000ab2000) (0xc00054c000) Stream removed, broadcasting: 1\nI0321 21:56:31.999629 2042 log.go:172] (0xc000ab2000) Go away received\nI0321 21:56:32.000054 2042 log.go:172] (0xc000ab2000) (0xc00054c000) Stream removed, broadcasting: 1\nI0321 21:56:32.000080 2042 log.go:172] (0xc000ab2000) (0xc00083fae0) Stream removed, broadcasting: 3\nI0321 21:56:32.000093 2042 log.go:172] (0xc000ab2000) (0xc00083fcc0) Stream removed, broadcasting: 5\n" Mar 21 21:56:32.005: INFO: stdout: 
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4965.svc.cluster.local\tcanonical name = externalsvc.services-4965.svc.cluster.local.\nName:\texternalsvc.services-4965.svc.cluster.local\nAddress: 10.102.35.39\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4965, will wait for the garbage collector to delete the pods Mar 21 21:56:32.087: INFO: Deleting ReplicationController externalsvc took: 28.889122ms Mar 21 21:56:32.388: INFO: Terminating ReplicationController externalsvc pods took: 300.296226ms Mar 21 21:56:39.636: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:56:39.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4965" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.284 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":153,"skipped":2428,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:56:39.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 21:56:40.346: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 21:56:42.355: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424600, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424600, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424600, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424600, 
loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 21:56:45.416: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:56:45.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8163" for this suite. STEP: Destroying namespace "webhook-8163-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.950 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":154,"skipped":2437,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:56:45.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 21 21:56:45.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4564' Mar 21 21:56:46.170: INFO: stderr: "" Mar 21 21:56:46.170: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Mar 21 21:56:47.175: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 21:56:47.175: INFO: Found 0 / 1 Mar 21 21:56:48.200: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 21:56:48.200: INFO: Found 0 / 1 Mar 21 21:56:49.175: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 21:56:49.175: INFO: Found 0 / 1 Mar 21 21:56:50.176: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 21:56:50.176: INFO: Found 1 / 1 Mar 21 21:56:50.176: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 21 21:56:50.180: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 21:56:50.180: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 21 21:56:50.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-j85q8 --namespace=kubectl-4564 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 21 21:56:50.274: INFO: stderr: "" Mar 21 21:56:50.274: INFO: stdout: "pod/agnhost-master-j85q8 patched\n" STEP: checking annotations Mar 21 21:56:50.296: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 21:56:50.296: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:56:50.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4564" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":155,"skipped":2442,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:56:50.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Mar 21 21:56:54.382: INFO: Pod pod-hostip-70254b7d-ce19-4b40-8bfd-24792fef5410 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:56:54.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5381" for this suite. 
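The hostIP logged above (172.17.0.8) is the address of the node the pod was scheduled onto. A container can read the same value through the downward API without talking to the apiserver; a minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-example        # illustrative name
spec:
  containers:
  - name: test
    image: k8s.gcr.io/pause:3.1
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # populated once the pod is bound to a node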
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2461,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:56:54.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 21 21:56:54.478: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2457 /api/v1/namespaces/watch-2457/configmaps/e2e-watch-test-configmap-a f8bde18f-e404-4791-9364-35cd6b29e8db 1656945 0 2020-03-21 21:56:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 21 21:56:54.478: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2457 /api/v1/namespaces/watch-2457/configmaps/e2e-watch-test-configmap-a f8bde18f-e404-4791-9364-35cd6b29e8db 1656945 0 2020-03-21 21:56:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 21 21:57:04.486: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2457 /api/v1/namespaces/watch-2457/configmaps/e2e-watch-test-configmap-a f8bde18f-e404-4791-9364-35cd6b29e8db 1657003 0 2020-03-21 21:56:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 21 21:57:04.486: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2457 /api/v1/namespaces/watch-2457/configmaps/e2e-watch-test-configmap-a f8bde18f-e404-4791-9364-35cd6b29e8db 1657003 0 2020-03-21 21:56:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 21 21:57:14.494: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2457 /api/v1/namespaces/watch-2457/configmaps/e2e-watch-test-configmap-a f8bde18f-e404-4791-9364-35cd6b29e8db 1657036 0 2020-03-21 21:56:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 21 21:57:14.494: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2457 /api/v1/namespaces/watch-2457/configmaps/e2e-watch-test-configmap-a f8bde18f-e404-4791-9364-35cd6b29e8db 1657036 0 2020-03-21 21:56:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] 
map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 21 21:57:24.501: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2457 /api/v1/namespaces/watch-2457/configmaps/e2e-watch-test-configmap-a f8bde18f-e404-4791-9364-35cd6b29e8db 1657066 0 2020-03-21 21:56:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 21 21:57:24.501: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2457 /api/v1/namespaces/watch-2457/configmaps/e2e-watch-test-configmap-a f8bde18f-e404-4791-9364-35cd6b29e8db 1657066 0 2020-03-21 21:56:54 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 21 21:57:34.509: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2457 /api/v1/namespaces/watch-2457/configmaps/e2e-watch-test-configmap-b 2bd417d2-b986-411c-aea2-cb0288129c0e 1657096 0 2020-03-21 21:57:34 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 21 21:57:34.509: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2457 /api/v1/namespaces/watch-2457/configmaps/e2e-watch-test-configmap-b 2bd417d2-b986-411c-aea2-cb0288129c0e 1657096 0 2020-03-21 21:57:34 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 21 21:57:44.532: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2457 /api/v1/namespaces/watch-2457/configmaps/e2e-watch-test-configmap-b 2bd417d2-b986-411c-aea2-cb0288129c0e 1657126 0 2020-03-21 21:57:34 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 21 21:57:44.533: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2457 /api/v1/namespaces/watch-2457/configmaps/e2e-watch-test-configmap-b 2bd417d2-b986-411c-aea2-cb0288129c0e 1657126 0 2020-03-21 21:57:34 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:57:54.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2457" for this suite. 
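Each watcher above filters by label selector, so the A watcher and the A-or-B watcher both receive every ADDED/MODIFIED/DELETED event for configmap A, while the B watcher stays silent until configmap B exists; that is why each "Got :" event in the log appears twice. The objects driving the events look roughly like this (data values illustrative, labels taken from the log):

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A   # matched by the A and A-or-B selectors only
data:
  mutation: "1"

The same stream can be observed interactively with 'kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch'.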
• [SLOW TEST:60.265 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":157,"skipped":2472,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:57:54.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 21 21:57:54.745: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:58:00.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7253" for this suite. 
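With restartPolicy: Never, a failing init container is terminal: the pod goes straight to Failed and the app container is never started, which is exactly what this test asserts. A minimal sketch under assumed images and commands (the real test builds the pod programmatically):

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail             # illustrative name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/false"]       # exits non-zero, so the pod fails here
  containers:
  - name: run1
    image: busybox:1.29
    command: ["/bin/true"]        # never reached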
• [SLOW TEST:6.151 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":158,"skipped":2489,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:58:00.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-599d66ac-683f-4f48-b968-df7f534e5fdb STEP: Creating a pod to test consume secrets Mar 21 21:58:00.873: INFO: Waiting up to 5m0s for pod "pod-secrets-67f0ee59-789b-410b-a77b-6e17ae1832a5" in namespace "secrets-6339" to be "success or failure" Mar 21 21:58:00.955: INFO: Pod "pod-secrets-67f0ee59-789b-410b-a77b-6e17ae1832a5": Phase="Pending", Reason="", readiness=false. Elapsed: 82.019524ms Mar 21 21:58:02.960: INFO: Pod "pod-secrets-67f0ee59-789b-410b-a77b-6e17ae1832a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086299922s Mar 21 21:58:04.965: INFO: Pod "pod-secrets-67f0ee59-789b-410b-a77b-6e17ae1832a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091550873s STEP: Saw pod success Mar 21 21:58:04.965: INFO: Pod "pod-secrets-67f0ee59-789b-410b-a77b-6e17ae1832a5" satisfied condition "success or failure" Mar 21 21:58:04.967: INFO: Trying to get logs from node jerma-worker pod pod-secrets-67f0ee59-789b-410b-a77b-6e17ae1832a5 container secret-volume-test: STEP: delete the pod Mar 21 21:58:04.999: INFO: Waiting for pod pod-secrets-67f0ee59-789b-410b-a77b-6e17ae1832a5 to disappear Mar 21 21:58:05.003: INFO: Pod pod-secrets-67f0ee59-789b-410b-a77b-6e17ae1832a5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:58:05.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6339" for this suite. 
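The test name encodes the interesting parts of the pod spec: the secret files are written with a restrictive defaultMode, and fsGroup makes them group-readable by the non-root user. A sketch with assumed UIDs, mode, and image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example       # illustrative name
spec:
  securityContext:
    fsGroup: 1001                 # volume files get this group
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/pause:3.1   # illustrative; the real test uses a mount-checking image
    securityContext:
      runAsUser: 1000             # non-root
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0440           # readable by owner and fsGroup, nobody else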
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2499,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:58:05.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-7tkp STEP: Creating a pod to test atomic-volume-subpath Mar 21 21:58:05.120: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7tkp" in namespace "subpath-3589" to be "success or failure" Mar 21 21:58:05.123: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.132469ms Mar 21 21:58:07.127: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007357308s Mar 21 21:58:09.131: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Running", Reason="", readiness=true. Elapsed: 4.011765123s Mar 21 21:58:11.136: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Running", Reason="", readiness=true. Elapsed: 6.015947543s Mar 21 21:58:13.140: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Running", Reason="", readiness=true. Elapsed: 8.020207754s Mar 21 21:58:15.144: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Running", Reason="", readiness=true. Elapsed: 10.023974938s Mar 21 21:58:17.148: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Running", Reason="", readiness=true. Elapsed: 12.028227722s Mar 21 21:58:19.152: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Running", Reason="", readiness=true. Elapsed: 14.032594663s Mar 21 21:58:21.157: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Running", Reason="", readiness=true. Elapsed: 16.037228738s Mar 21 21:58:23.168: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Running", Reason="", readiness=true. Elapsed: 18.048574711s Mar 21 21:58:25.172: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Running", Reason="", readiness=true. Elapsed: 20.051865463s Mar 21 21:58:27.176: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Running", Reason="", readiness=true. Elapsed: 22.05606671s Mar 21 21:58:29.180: INFO: Pod "pod-subpath-test-configmap-7tkp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.060086286s STEP: Saw pod success Mar 21 21:58:29.180: INFO: Pod "pod-subpath-test-configmap-7tkp" satisfied condition "success or failure" Mar 21 21:58:29.183: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-7tkp container test-container-subpath-configmap-7tkp: STEP: delete the pod Mar 21 21:58:29.263: INFO: Waiting for pod pod-subpath-test-configmap-7tkp to disappear Mar 21 21:58:29.278: INFO: Pod pod-subpath-test-configmap-7tkp no longer exists STEP: Deleting pod pod-subpath-test-configmap-7tkp Mar 21 21:58:29.278: INFO: Deleting pod "pod-subpath-test-configmap-7tkp" in namespace "subpath-3589" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:58:29.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3589" for this suite. • [SLOW TEST:24.278 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":160,"skipped":2512,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:58:29.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 21 21:58:29.340: INFO: Waiting up to 5m0s for pod "pod-daa1a8a5-3ddd-440a-a526-2f0de4133050" in namespace "emptydir-2173" to be "success or failure" Mar 21 21:58:29.344: INFO: Pod "pod-daa1a8a5-3ddd-440a-a526-2f0de4133050": Phase="Pending", Reason="", readiness=false. Elapsed: 3.736125ms Mar 21 21:58:31.348: INFO: Pod "pod-daa1a8a5-3ddd-440a-a526-2f0de4133050": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007808044s Mar 21 21:58:33.352: INFO: Pod "pod-daa1a8a5-3ddd-440a-a526-2f0de4133050": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012079459s STEP: Saw pod success Mar 21 21:58:33.352: INFO: Pod "pod-daa1a8a5-3ddd-440a-a526-2f0de4133050" satisfied condition "success or failure" Mar 21 21:58:33.355: INFO: Trying to get logs from node jerma-worker pod pod-daa1a8a5-3ddd-440a-a526-2f0de4133050 container test-container: STEP: delete the pod Mar 21 21:58:33.375: INFO: Waiting for pod pod-daa1a8a5-3ddd-440a-a526-2f0de4133050 to disappear Mar 21 21:58:33.379: INFO: Pod pod-daa1a8a5-3ddd-440a-a526-2f0de4133050 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:58:33.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2173" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2513,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:58:33.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Mar 21 21:58:33.432: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 21 21:58:33.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3230' Mar 21 21:58:33.748: INFO: stderr: "" Mar 21 21:58:33.748: INFO: stdout: "service/agnhost-slave created\n" Mar 21 21:58:33.748: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 21 21:58:33.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3230' Mar 21 21:58:34.015: INFO: stderr: "" Mar 21 21:58:34.015: INFO: stdout: "service/agnhost-master created\n" Mar 21 21:58:34.015: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 21 21:58:34.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3230' Mar 21 21:58:34.274: INFO: stderr: "" Mar 21 21:58:34.274: INFO: stdout: "service/frontend created\n" Mar 21 21:58:34.274: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 21 21:58:34.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3230' Mar 21 21:58:34.525: INFO: stderr: "" Mar 21 21:58:34.525: INFO: stdout: "deployment.apps/frontend created\n" Mar 21 21:58:34.525: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 21 21:58:34.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3230' Mar 21 21:58:34.781: INFO: stderr: "" Mar 21 21:58:34.782: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 21 21:58:34.782: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 21 21:58:34.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3230' Mar 21 21:58:35.076: INFO: stderr: "" Mar 21 21:58:35.076: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 21 21:58:35.076: INFO: Waiting for all frontend pods to be Running. Mar 21 21:58:45.127: INFO: Waiting for frontend to serve content. Mar 21 21:58:45.136: INFO: Trying to add a new entry to the guestbook. Mar 21 21:58:45.180: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 21 21:58:45.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3230' Mar 21 21:58:45.452: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 21 21:58:45.452: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 21 21:58:45.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3230' Mar 21 21:58:45.614: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 21 21:58:45.614: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 21 21:58:45.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3230' Mar 21 21:58:45.737: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 21 21:58:45.737: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 21 21:58:45.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3230' Mar 21 21:58:45.833: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 21 21:58:45.833: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 21 21:58:45.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3230' Mar 21 21:58:45.942: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 21 21:58:45.942: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 21 21:58:45.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3230' Mar 21 21:58:46.050: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 21 21:58:46.050: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:58:46.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3230" for this suite. 
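The frontend Service manifest above ships with its LoadBalancer line commented out because, as the comment in it says, not every cluster can provision external load balancers (this kind-based cluster has no cloud provider). On a cluster that supports it, the uncommented variant would look like this and receive an external IP:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer              # the line left commented out in the manifest above
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend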
• [SLOW TEST:12.671 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:386 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":162,"skipped":2519,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:58:46.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-8c3178a7-6bfd-4990-a477-384086a2eff1 STEP: Creating a pod to test consume configMaps Mar 21 21:58:46.167: INFO: Waiting up to 5m0s for pod "pod-configmaps-dcc5a087-8bac-4963-b19c-5fd425e5ffd8" in namespace "configmap-6715" to be "success or failure" Mar 21 21:58:46.189: INFO: Pod "pod-configmaps-dcc5a087-8bac-4963-b19c-5fd425e5ffd8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.266834ms Mar 21 21:58:48.193: INFO: Pod "pod-configmaps-dcc5a087-8bac-4963-b19c-5fd425e5ffd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026115155s Mar 21 21:58:50.197: INFO: Pod "pod-configmaps-dcc5a087-8bac-4963-b19c-5fd425e5ffd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030009404s STEP: Saw pod success Mar 21 21:58:50.197: INFO: Pod "pod-configmaps-dcc5a087-8bac-4963-b19c-5fd425e5ffd8" satisfied condition "success or failure" Mar 21 21:58:50.199: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-dcc5a087-8bac-4963-b19c-5fd425e5ffd8 container configmap-volume-test: STEP: delete the pod Mar 21 21:58:50.305: INFO: Waiting for pod pod-configmaps-dcc5a087-8bac-4963-b19c-5fd425e5ffd8 to disappear Mar 21 21:58:50.364: INFO: Pod pod-configmaps-dcc5a087-8bac-4963-b19c-5fd425e5ffd8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:58:50.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6715" for this suite. 
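Unlike the projected variant earlier, this test mounts the configMap through the plain configMap volume source; projected volumes exist to combine several sources under one mount point, while a single source is usually written directly, roughly like this (names illustrative, the real run suffixes them with UUIDs):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  containers:
  - name: configmap-volume-test
    image: k8s.gcr.io/pause:3.1
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume   # the object created in the STEP above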
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2527,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:58:50.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 21:58:51.002: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 21:58:53.012: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424731, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424731, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424731, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720424730, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 21:58:56.061: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:58:56.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9243-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:58:57.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8754" for this suite. STEP: Destroying namespace "webhook-8754-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.000 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":164,"skipped":2545,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:58:57.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 21 21:59:00.505: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:59:00.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8357" for this suite. 
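The "Expected: &{DONE}" line above is the kubelet reading the termination message back from a custom file; both knobs live on the container spec. A sketch, assuming an illustrative path, UID, and image:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox:1.29
    securityContext:
      runAsUser: 1000                 # non-root, as the test name requires
    terminationMessagePath: /dev/termination-custom-log   # non-default path
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]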
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2563,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:59:00.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7119 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7119 I0321 21:59:00.681863 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7119, replica count: 2 I0321 21:59:03.732340 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 21:59:06.732849 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 21 21:59:06.732: INFO: Creating new exec pod Mar 21 21:59:11.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7119 execpod95gpp -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 21 21:59:11.980: INFO: stderr: "I0321 21:59:11.879570 2355 log.go:172] (0xc0005dad10) (0xc000b0a000) Create stream\nI0321 21:59:11.879632 2355 log.go:172] (0xc0005dad10) (0xc000b0a000) Stream added, broadcasting: 1\nI0321 21:59:11.882206 2355 log.go:172] (0xc0005dad10) Reply frame received for 1\nI0321 21:59:11.882249 2355 log.go:172] (0xc0005dad10) (0xc0007ac000) Create stream\nI0321 21:59:11.882261 2355 log.go:172] (0xc0005dad10) (0xc0007ac000) Stream added, broadcasting: 3\nI0321 21:59:11.883233 2355 log.go:172] (0xc0005dad10) Reply frame received for 3\nI0321 21:59:11.883259 2355 log.go:172] (0xc0005dad10) (0xc000657ae0) Create stream\nI0321 21:59:11.883269 2355 log.go:172] (0xc0005dad10) (0xc000657ae0) Stream added, broadcasting: 5\nI0321 21:59:11.884359 2355 log.go:172] (0xc0005dad10) Reply frame received for 5\nI0321 21:59:11.974106 2355 log.go:172] (0xc0005dad10) Data frame received for 5\nI0321 21:59:11.974136 2355 log.go:172] (0xc000657ae0) (5) Data frame handling\nI0321 21:59:11.974156 2355 log.go:172] (0xc000657ae0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0321 21:59:11.975029 2355 log.go:172] (0xc0005dad10) Data frame received for 5\nI0321 21:59:11.975057 2355 log.go:172] (0xc000657ae0) (5) Data frame handling\nI0321 21:59:11.975078 2355 log.go:172] (0xc000657ae0) (5) Data frame 
sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0321 21:59:11.975226 2355 log.go:172] (0xc0005dad10) Data frame received for 3\nI0321 21:59:11.975248 2355 log.go:172] (0xc0007ac000) (3) Data frame handling\nI0321 21:59:11.975452 2355 log.go:172] (0xc0005dad10) Data frame received for 5\nI0321 21:59:11.975462 2355 log.go:172] (0xc000657ae0) (5) Data frame handling\nI0321 21:59:11.977224 2355 log.go:172] (0xc0005dad10) Data frame received for 1\nI0321 21:59:11.977240 2355 log.go:172] (0xc000b0a000) (1) Data frame handling\nI0321 21:59:11.977248 2355 log.go:172] (0xc000b0a000) (1) Data frame sent\nI0321 21:59:11.977388 2355 log.go:172] (0xc0005dad10) (0xc000b0a000) Stream removed, broadcasting: 1\nI0321 21:59:11.977471 2355 log.go:172] (0xc0005dad10) Go away received\nI0321 21:59:11.977633 2355 log.go:172] (0xc0005dad10) (0xc000b0a000) Stream removed, broadcasting: 1\nI0321 21:59:11.977647 2355 log.go:172] (0xc0005dad10) (0xc0007ac000) Stream removed, broadcasting: 3\nI0321 21:59:11.977653 2355 log.go:172] (0xc0005dad10) (0xc000657ae0) Stream removed, broadcasting: 5\n" Mar 21 21:59:11.981: INFO: stdout: "" Mar 21 21:59:11.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7119 execpod95gpp -- /bin/sh -x -c nc -zv -t -w 2 10.100.223.129 80' Mar 21 21:59:12.216: INFO: stderr: "I0321 21:59:12.131014 2376 log.go:172] (0xc0000f51e0) (0xc0006a9a40) Create stream\nI0321 21:59:12.131079 2376 log.go:172] (0xc0000f51e0) (0xc0006a9a40) Stream added, broadcasting: 1\nI0321 21:59:12.134513 2376 log.go:172] (0xc0000f51e0) Reply frame received for 1\nI0321 21:59:12.134552 2376 log.go:172] (0xc0000f51e0) (0xc000922000) Create stream\nI0321 21:59:12.134561 2376 log.go:172] (0xc0000f51e0) (0xc000922000) Stream added, broadcasting: 3\nI0321 21:59:12.135513 2376 log.go:172] (0xc0000f51e0) Reply frame received for 3\nI0321 21:59:12.135536 2376 log.go:172] (0xc0000f51e0) (0xc000212000) Create stream\nI0321 21:59:12.135544 2376 log.go:172] (0xc0000f51e0) (0xc000212000) Stream added, broadcasting: 5\nI0321 21:59:12.136369 2376 log.go:172] (0xc0000f51e0) Reply frame received for 5\nI0321 21:59:12.209473 2376 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0321 21:59:12.209522 2376 log.go:172] (0xc000922000) (3) Data frame handling\nI0321 21:59:12.209550 2376 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0321 21:59:12.209566 2376 log.go:172] (0xc000212000) (5) Data frame handling\nI0321 21:59:12.209583 2376 log.go:172] (0xc000212000) (5) Data frame sent\nI0321 21:59:12.209591 2376 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0321 21:59:12.209597 2376 log.go:172] (0xc000212000) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.223.129 80\nConnection to 10.100.223.129 80 port [tcp/http] succeeded!\nI0321 21:59:12.210879 2376 log.go:172] (0xc0000f51e0) Data frame received for 1\nI0321 21:59:12.210912 2376 log.go:172] (0xc0006a9a40) (1) Data frame handling\nI0321 21:59:12.210951 2376 log.go:172] (0xc0006a9a40) (1) Data frame sent\nI0321 21:59:12.211075 2376 log.go:172] (0xc0000f51e0) (0xc0006a9a40) Stream removed, broadcasting: 1\nI0321 21:59:12.211266 2376 log.go:172] (0xc0000f51e0) Go away received\nI0321 21:59:12.211610 2376 log.go:172] (0xc0000f51e0) (0xc0006a9a40) Stream removed, broadcasting: 1\nI0321 21:59:12.211634 2376 log.go:172] (0xc0000f51e0) (0xc000922000) Stream removed, broadcasting: 3\nI0321 21:59:12.211647 2376 log.go:172] (0xc0000f51e0) (0xc000212000) Stream removed, broadcasting: 5\n" Mar 21 
21:59:12.216: INFO: stdout: "" Mar 21 21:59:12.216: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:59:12.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7119" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.057 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":166,"skipped":2579,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:59:12.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-b6a381ac-9960-408f-9bb2-4b0b2dc496e1 STEP: Creating a pod to test consume secrets Mar 21 21:59:13.352: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd091ab0-4970-4e40-a21b-7a96f7a352e3" in namespace "projected-881" to be "success or failure" Mar 21 21:59:13.401: INFO: Pod "pod-projected-secrets-cd091ab0-4970-4e40-a21b-7a96f7a352e3": Phase="Pending", Reason="", readiness=false. Elapsed: 48.602789ms Mar 21 21:59:15.405: INFO: Pod "pod-projected-secrets-cd091ab0-4970-4e40-a21b-7a96f7a352e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053064929s Mar 21 21:59:17.425: INFO: Pod "pod-projected-secrets-cd091ab0-4970-4e40-a21b-7a96f7a352e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07298754s STEP: Saw pod success Mar 21 21:59:17.425: INFO: Pod "pod-projected-secrets-cd091ab0-4970-4e40-a21b-7a96f7a352e3" satisfied condition "success or failure" Mar 21 21:59:17.428: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-cd091ab0-4970-4e40-a21b-7a96f7a352e3 container projected-secret-volume-test: STEP: delete the pod Mar 21 21:59:17.444: INFO: Waiting for pod pod-projected-secrets-cd091ab0-4970-4e40-a21b-7a96f7a352e3 to disappear Mar 21 21:59:17.455: INFO: Pod pod-projected-secrets-cd091ab0-4970-4e40-a21b-7a96f7a352e3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:59:17.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-881" for this suite. 
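The "mappings" checked above correspond to the items list of a projected secret volume source, which renames a secret key to a chosen file path inside the pod. A minimal sketch under assumed defaults (the secret, pod, and key names are illustrative, not the generated ones from this run):
  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["/bin/sh", "-c", "cat /etc/projected/remapped-key"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/projected
    volumes:
    - name: secret-volume
      projected:
        sources:
        - secret:
            name: demo-secret
            items:
            - key: data-1          # key inside the secret
              path: remapped-key   # file name the container sees (the mapping)
  EOF
  kubectl logs projected-secret-demo   # once it completes, prints: value-1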
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2583,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:59:17.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 21 21:59:17.516: INFO: Waiting up to 5m0s for pod "client-containers-00a63889-68c7-4670-b157-d3cad7d3f049" in namespace "containers-395" to be "success or failure" Mar 21 21:59:17.556: INFO: Pod "client-containers-00a63889-68c7-4670-b157-d3cad7d3f049": Phase="Pending", Reason="", readiness=false. Elapsed: 39.32062ms Mar 21 21:59:19.563: INFO: Pod "client-containers-00a63889-68c7-4670-b157-d3cad7d3f049": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046388755s Mar 21 21:59:21.567: INFO: Pod "client-containers-00a63889-68c7-4670-b157-d3cad7d3f049": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050580722s STEP: Saw pod success Mar 21 21:59:21.567: INFO: Pod "client-containers-00a63889-68c7-4670-b157-d3cad7d3f049" satisfied condition "success or failure" Mar 21 21:59:21.570: INFO: Trying to get logs from node jerma-worker2 pod client-containers-00a63889-68c7-4670-b157-d3cad7d3f049 container test-container: STEP: delete the pod Mar 21 21:59:21.594: INFO: Waiting for pod client-containers-00a63889-68c7-4670-b157-d3cad7d3f049 to disappear Mar 21 21:59:21.599: INFO: Pod client-containers-00a63889-68c7-4670-b157-d3cad7d3f049 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:59:21.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-395" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2588,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:59:21.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:59:32.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3957" for this suite. • [SLOW TEST:11.219 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":169,"skipped":2611,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:59:32.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 21:59:32.888: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 5.325764ms) Mar 21 21:59:32.934: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 46.472022ms) Mar 21 21:59:32.942: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 7.745288ms) Mar 21 21:59:32.945: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.395447ms) Mar 21 21:59:32.949: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.521025ms) Mar 21 21:59:32.953: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.505031ms) Mar 21 21:59:32.956: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.604943ms) Mar 21 21:59:32.960: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.315307ms) Mar 21 21:59:32.964: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.993612ms) Mar 21 21:59:32.967: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.384229ms) Mar 21 21:59:32.970: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.770045ms) Mar 21 21:59:32.973: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.891218ms) Mar 21 21:59:32.976: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.016564ms) Mar 21 21:59:32.979: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.049659ms) Mar 21 21:59:32.983: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.467078ms) Mar 21 21:59:32.986: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.381486ms) Mar 21 21:59:32.989: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.404668ms) Mar 21 21:59:32.993: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.550148ms) Mar 21 21:59:32.997: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.781033ms) Mar 21 21:59:33.004: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/
(200; 7.086288ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 21:59:33.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8816" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":170,"skipped":2621,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 21:59:33.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1631 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1631;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1631 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1631;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1631.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1631.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1631.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1631.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1631.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1631.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1631.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1631.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1631.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1631.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1631.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 111.187.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.187.111_udp@PTR;check="$$(dig +tcp +noall +answer +search 111.187.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.187.111_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1631 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1631;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1631 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1631;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1631.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1631.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1631.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1631.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1631.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1631.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1631.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1631.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1631.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1631.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1631.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1631.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 111.187.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.187.111_udp@PTR;check="$$(dig +tcp +noall +answer +search 111.187.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.187.111_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 21:59:39.214: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.217: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.221: INFO: Unable to read wheezy_udp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.224: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.227: INFO: Unable to read wheezy_udp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.229: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.232: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.235: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.255: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.258: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.261: INFO: Unable to read jessie_udp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.263: INFO: Unable to read jessie_tcp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.265: INFO: Unable to read jessie_udp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.268: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.270: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.273: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:39.294: INFO: Lookups using dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1631 wheezy_tcp@dns-test-service.dns-1631 wheezy_udp@dns-test-service.dns-1631.svc wheezy_tcp@dns-test-service.dns-1631.svc wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1631 jessie_tcp@dns-test-service.dns-1631 jessie_udp@dns-test-service.dns-1631.svc jessie_tcp@dns-test-service.dns-1631.svc jessie_udp@_http._tcp.dns-test-service.dns-1631.svc jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc] Mar 21 21:59:44.300: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.304: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.308: INFO: Unable to read wheezy_udp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.311: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.315: INFO: Unable to read wheezy_udp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.318: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.322: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.325: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.348: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.352: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.355: INFO: Unable to read jessie_udp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.359: INFO: Unable to read jessie_tcp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.362: INFO: Unable to read jessie_udp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.364: INFO: Unable to read jessie_tcp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.367: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.370: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:44.388: INFO: Lookups using dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1631 wheezy_tcp@dns-test-service.dns-1631 wheezy_udp@dns-test-service.dns-1631.svc wheezy_tcp@dns-test-service.dns-1631.svc wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1631 jessie_tcp@dns-test-service.dns-1631 jessie_udp@dns-test-service.dns-1631.svc jessie_tcp@dns-test-service.dns-1631.svc jessie_udp@_http._tcp.dns-test-service.dns-1631.svc jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc] Mar 21 21:59:49.299: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.303: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.307: INFO: Unable to read wheezy_udp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.310: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1631 from pod 
dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.313: INFO: Unable to read wheezy_udp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.317: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.320: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.324: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.349: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.351: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.354: INFO: Unable to read jessie_udp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.356: INFO: Unable to read jessie_tcp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.358: INFO: Unable to read jessie_udp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.360: INFO: Unable to read jessie_tcp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.362: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.364: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:49.380: INFO: Lookups using dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1631 wheezy_tcp@dns-test-service.dns-1631 wheezy_udp@dns-test-service.dns-1631.svc wheezy_tcp@dns-test-service.dns-1631.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1631 jessie_tcp@dns-test-service.dns-1631 jessie_udp@dns-test-service.dns-1631.svc jessie_tcp@dns-test-service.dns-1631.svc jessie_udp@_http._tcp.dns-test-service.dns-1631.svc jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc] Mar 21 21:59:54.300: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.304: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.307: INFO: Unable to read wheezy_udp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.311: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.314: INFO: Unable to read wheezy_udp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.317: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.321: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.324: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.347: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.350: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.354: INFO: Unable to read jessie_udp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.357: INFO: Unable to read jessie_tcp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.360: INFO: Unable to read jessie_udp@dns-test-service.dns-1631.svc from pod 
dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.363: INFO: Unable to read jessie_tcp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.366: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.369: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:54.390: INFO: Lookups using dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1631 wheezy_tcp@dns-test-service.dns-1631 wheezy_udp@dns-test-service.dns-1631.svc wheezy_tcp@dns-test-service.dns-1631.svc wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1631 jessie_tcp@dns-test-service.dns-1631 jessie_udp@dns-test-service.dns-1631.svc jessie_tcp@dns-test-service.dns-1631.svc jessie_udp@_http._tcp.dns-test-service.dns-1631.svc jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc] Mar 21 21:59:59.300: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.304: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.307: INFO: Unable to read wheezy_udp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.310: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.312: INFO: Unable to read wheezy_udp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.315: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.318: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.335: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc from pod 
dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.357: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.360: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.363: INFO: Unable to read jessie_udp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.365: INFO: Unable to read jessie_tcp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.368: INFO: Unable to read jessie_udp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.370: INFO: Unable to read jessie_tcp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.373: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.376: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 21:59:59.394: INFO: Lookups using dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1631 wheezy_tcp@dns-test-service.dns-1631 wheezy_udp@dns-test-service.dns-1631.svc wheezy_tcp@dns-test-service.dns-1631.svc wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1631 jessie_tcp@dns-test-service.dns-1631 jessie_udp@dns-test-service.dns-1631.svc jessie_tcp@dns-test-service.dns-1631.svc jessie_udp@_http._tcp.dns-test-service.dns-1631.svc jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc] Mar 21 22:00:04.300: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.304: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.320: INFO: Unable to read wheezy_udp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the 
server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.326: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.330: INFO: Unable to read wheezy_udp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.336: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.339: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.343: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.369: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.371: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.372: INFO: Unable to read jessie_udp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.375: INFO: Unable to read jessie_tcp@dns-test-service.dns-1631 from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.377: INFO: Unable to read jessie_udp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.379: INFO: Unable to read jessie_tcp@dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.381: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.383: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc from pod dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366: the server could not find the requested resource (get pods dns-test-93993852-81a6-4a64-a008-cb0c98847366) Mar 21 22:00:04.397: INFO: Lookups using dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1631 wheezy_tcp@dns-test-service.dns-1631 wheezy_udp@dns-test-service.dns-1631.svc wheezy_tcp@dns-test-service.dns-1631.svc wheezy_udp@_http._tcp.dns-test-service.dns-1631.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1631.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1631 jessie_tcp@dns-test-service.dns-1631 jessie_udp@dns-test-service.dns-1631.svc jessie_tcp@dns-test-service.dns-1631.svc jessie_udp@_http._tcp.dns-test-service.dns-1631.svc jessie_tcp@_http._tcp.dns-test-service.dns-1631.svc] Mar 21 22:00:09.388: INFO: DNS probes using dns-1631/dns-test-93993852-81a6-4a64-a008-cb0c98847366 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:00:09.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1631" for this suite. • [SLOW TEST:36.993 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":171,"skipped":2640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:00:10.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-3a5a50e8-db98-4adf-a05e-802ffd9eec34 STEP: Creating a pod to test consume configMaps Mar 21 22:00:10.160: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e9d6159-b0bd-48b7-ac1a-8700b46ecdf0" in namespace "projected-2707" to be "success or failure" Mar 21 22:00:10.163: INFO: Pod "pod-projected-configmaps-0e9d6159-b0bd-48b7-ac1a-8700b46ecdf0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.098902ms Mar 21 22:00:12.167: INFO: Pod "pod-projected-configmaps-0e9d6159-b0bd-48b7-ac1a-8700b46ecdf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007592242s Mar 21 22:00:14.660: INFO: Pod "pod-projected-configmaps-0e9d6159-b0bd-48b7-ac1a-8700b46ecdf0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.500094336s Mar 21 22:00:16.664: INFO: Pod "pod-projected-configmaps-0e9d6159-b0bd-48b7-ac1a-8700b46ecdf0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.504272016s STEP: Saw pod success Mar 21 22:00:16.664: INFO: Pod "pod-projected-configmaps-0e9d6159-b0bd-48b7-ac1a-8700b46ecdf0" satisfied condition "success or failure" Mar 21 22:00:16.666: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-0e9d6159-b0bd-48b7-ac1a-8700b46ecdf0 container projected-configmap-volume-test: STEP: delete the pod Mar 21 22:00:16.705: INFO: Waiting for pod pod-projected-configmaps-0e9d6159-b0bd-48b7-ac1a-8700b46ecdf0 to disappear Mar 21 22:00:16.720: INFO: Pod pod-projected-configmaps-0e9d6159-b0bd-48b7-ac1a-8700b46ecdf0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:00:16.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2707" for this suite. • [SLOW TEST:6.799 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2667,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:00:16.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 21 22:00:16.866: INFO: Waiting up to 5m0s for pod "downward-api-cbe3d2e2-dd06-48a5-8b65-d289a8e27f7a" in namespace "downward-api-5628" to be "success or failure" Mar 21 22:00:16.923: INFO: Pod "downward-api-cbe3d2e2-dd06-48a5-8b65-d289a8e27f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 57.266533ms Mar 21 22:00:18.928: INFO: Pod "downward-api-cbe3d2e2-dd06-48a5-8b65-d289a8e27f7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061593753s Mar 21 22:00:20.931: INFO: Pod "downward-api-cbe3d2e2-dd06-48a5-8b65-d289a8e27f7a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.065056932s STEP: Saw pod success Mar 21 22:00:20.931: INFO: Pod "downward-api-cbe3d2e2-dd06-48a5-8b65-d289a8e27f7a" satisfied condition "success or failure" Mar 21 22:00:20.934: INFO: Trying to get logs from node jerma-worker2 pod downward-api-cbe3d2e2-dd06-48a5-8b65-d289a8e27f7a container dapi-container: STEP: delete the pod Mar 21 22:00:20.962: INFO: Waiting for pod downward-api-cbe3d2e2-dd06-48a5-8b65-d289a8e27f7a to disappear Mar 21 22:00:20.978: INFO: Pod downward-api-cbe3d2e2-dd06-48a5-8b65-d289a8e27f7a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:00:20.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5628" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2703,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:00:20.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 21 22:00:21.146: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 21 22:00:26.273: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:00:26.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5028" for this suite. 
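For context on the "Then the pod is released" step: a pod is released when its labels stop matching the controller's selector, at which point the RC drops its controller reference and creates a replacement. A rough way to watch the same thing by hand, assuming an RC selecting name=pod-release as in this spec:
  # Pick one pod currently owned by the RC.
  POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
  # Overwrite the matched label; the RC no longer selects this pod.
  kubectl label pod "$POD" name=released --overwrite
  # The released pod's ownerReferences entry for the RC is removed...
  kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences}'
  # ...and the RC starts a replacement to restore its replica count.
  kubectl get pods -l name=pod-release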
• [SLOW TEST:5.387 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":174,"skipped":2715,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:00:26.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0321 22:00:36.522501 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 21 22:00:36.522: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:00:36.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9868" for this suite. 
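The garbage-collector test relies on ownerReferences: deleting the RC without orphaning lets the cluster's garbage collector remove the pods it owned, which is what the "wait for all pods to be garbage collected" step verifies. With the v1.17-era kubectl used in this run, the two modes look roughly like this (the RC name is an assumption; the log does not print it):

# Default cascading delete: the garbage collector removes the RC's pods
# via their ownerReferences (the behavior verified above).
kubectl delete rc gc-test-rc --cascade=true
# The orphaning variant would instead leave the pods running without an owner:
#   kubectl delete rc gc-test-rc --cascade=false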
• [SLOW TEST:10.144 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":175,"skipped":2730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:00:36.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 21 22:00:36.585: INFO: Waiting up to 5m0s for pod "var-expansion-41519961-628f-463c-9501-3331d5b2c964" in namespace "var-expansion-8889" to be "success or failure" Mar 21 22:00:36.616: INFO: Pod "var-expansion-41519961-628f-463c-9501-3331d5b2c964": Phase="Pending", Reason="", readiness=false. Elapsed: 31.48043ms Mar 21 22:00:38.646: INFO: Pod "var-expansion-41519961-628f-463c-9501-3331d5b2c964": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061605972s Mar 21 22:00:40.651: INFO: Pod "var-expansion-41519961-628f-463c-9501-3331d5b2c964": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066015806s STEP: Saw pod success Mar 21 22:00:40.651: INFO: Pod "var-expansion-41519961-628f-463c-9501-3331d5b2c964" satisfied condition "success or failure" Mar 21 22:00:40.656: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-41519961-628f-463c-9501-3331d5b2c964 container dapi-container: STEP: delete the pod Mar 21 22:00:40.690: INFO: Waiting for pod var-expansion-41519961-628f-463c-9501-3331d5b2c964 to disappear Mar 21 22:00:40.703: INFO: Pod var-expansion-41519961-628f-463c-9501-3331d5b2c964 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:00:40.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8889" for this suite. 
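The substitution in the variable-expansion test is done by Kubernetes, not by a shell: $(VAR) references in a container's command/args are expanded from that container's env. A minimal sketch (pod name, variable value, and the busybox image are illustrative assumptions):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/echo"]
    # $(TEST_VAR) is expanded by Kubernetes from env; no shell is involved.
    args: ["test-value is $(TEST_VAR)"]
    env:
    - name: TEST_VAR
      value: "expanded"
EOF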
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2764,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:00:40.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9acb1f40-6545-4f57-b92d-deb20c5a7e9e STEP: Creating a pod to test consume secrets Mar 21 22:00:40.820: INFO: Waiting up to 5m0s for pod "pod-secrets-2db30971-4c19-4c49-85b5-f66f8f249de7" in namespace "secrets-6800" to be "success or failure" Mar 21 22:00:40.828: INFO: Pod "pod-secrets-2db30971-4c19-4c49-85b5-f66f8f249de7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127732ms Mar 21 22:00:42.833: INFO: Pod "pod-secrets-2db30971-4c19-4c49-85b5-f66f8f249de7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01278042s Mar 21 22:00:44.837: INFO: Pod "pod-secrets-2db30971-4c19-4c49-85b5-f66f8f249de7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017059214s STEP: Saw pod success Mar 21 22:00:44.837: INFO: Pod "pod-secrets-2db30971-4c19-4c49-85b5-f66f8f249de7" satisfied condition "success or failure" Mar 21 22:00:44.840: INFO: Trying to get logs from node jerma-worker pod pod-secrets-2db30971-4c19-4c49-85b5-f66f8f249de7 container secret-env-test: STEP: delete the pod Mar 21 22:00:44.866: INFO: Waiting for pod pod-secrets-2db30971-4c19-4c49-85b5-f66f8f249de7 to disappear Mar 21 22:00:44.871: INFO: Pod pod-secrets-2db30971-4c19-4c49-85b5-f66f8f249de7 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:00:44.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6800" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2773,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:00:44.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1861 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 21 22:00:44.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1638' Mar 21 22:00:45.054: INFO: stderr: "" Mar 21 22:00:45.054: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1866 Mar 21 22:00:45.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1638' Mar 21 22:00:49.497: INFO: stderr: "" Mar 21 22:00:49.497: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:00:49.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1638" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":178,"skipped":2808,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:00:49.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 21 22:00:49.579: INFO: Waiting up to 5m0s for pod "client-containers-f2a83d3c-a561-472b-97a1-3dcb849fad51" in namespace "containers-9901" to be "success or failure" Mar 21 22:00:49.629: INFO: Pod "client-containers-f2a83d3c-a561-472b-97a1-3dcb849fad51": Phase="Pending", Reason="", readiness=false. Elapsed: 49.094836ms Mar 21 22:00:51.632: INFO: Pod "client-containers-f2a83d3c-a561-472b-97a1-3dcb849fad51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052756776s Mar 21 22:00:53.659: INFO: Pod "client-containers-f2a83d3c-a561-472b-97a1-3dcb849fad51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079558985s STEP: Saw pod success Mar 21 22:00:53.659: INFO: Pod "client-containers-f2a83d3c-a561-472b-97a1-3dcb849fad51" satisfied condition "success or failure" Mar 21 22:00:53.662: INFO: Trying to get logs from node jerma-worker2 pod client-containers-f2a83d3c-a561-472b-97a1-3dcb849fad51 container test-container: STEP: delete the pod Mar 21 22:00:53.680: INFO: Waiting for pod client-containers-f2a83d3c-a561-472b-97a1-3dcb849fad51 to disappear Mar 21 22:00:53.685: INFO: Pod client-containers-f2a83d3c-a561-472b-97a1-3dcb849fad51 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:00:53.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9901" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:00:53.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-5c3cdefd-b165-4237-8437-072114740f29 STEP: Creating configMap with name cm-test-opt-upd-ab885f12-dbb5-42dd-b8d6-00c229c74556 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5c3cdefd-b165-4237-8437-072114740f29 STEP: Updating configmap cm-test-opt-upd-ab885f12-dbb5-42dd-b8d6-00c229c74556 STEP: Creating configMap with name cm-test-opt-create-8176f30e-e9d3-4f05-a81c-33362dff5b96 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:02:08.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4971" for this suite. 
• [SLOW TEST:74.502 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2845,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:02:08.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-98f27f4b-7629-42a5-9af4-9f974c429802 STEP: Creating a pod to test consume configMaps Mar 21 22:02:08.283: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cd60c1c6-8f00-42b3-bc19-a073b4d0918b" in namespace "projected-7099" to be "success or failure" Mar 21 22:02:08.292: INFO: Pod "pod-projected-configmaps-cd60c1c6-8f00-42b3-bc19-a073b4d0918b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.55219ms Mar 21 22:02:10.297: INFO: Pod "pod-projected-configmaps-cd60c1c6-8f00-42b3-bc19-a073b4d0918b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014144663s Mar 21 22:02:12.301: INFO: Pod "pod-projected-configmaps-cd60c1c6-8f00-42b3-bc19-a073b4d0918b": Phase="Running", Reason="", readiness=true. Elapsed: 4.017869434s Mar 21 22:02:14.305: INFO: Pod "pod-projected-configmaps-cd60c1c6-8f00-42b3-bc19-a073b4d0918b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022188538s STEP: Saw pod success Mar 21 22:02:14.305: INFO: Pod "pod-projected-configmaps-cd60c1c6-8f00-42b3-bc19-a073b4d0918b" satisfied condition "success or failure" Mar 21 22:02:14.308: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-cd60c1c6-8f00-42b3-bc19-a073b4d0918b container projected-configmap-volume-test: STEP: delete the pod Mar 21 22:02:14.368: INFO: Waiting for pod pod-projected-configmaps-cd60c1c6-8f00-42b3-bc19-a073b4d0918b to disappear Mar 21 22:02:14.376: INFO: Pod pod-projected-configmaps-cd60c1c6-8f00-42b3-bc19-a073b4d0918b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:02:14.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7099" for this suite. 
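"As non-root" means the container reads the mapped configMap file while running under a non-zero UID, which the pod-level securityContext controls. A sketch under that assumption (UID, names, and paths are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000   # run the container process as a non-root UID
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/path/to/data-2"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-2          # the "mapping": key -> custom path
            path: path/to/data-2
EOF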
• [SLOW TEST:6.192 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2848,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:02:14.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1788 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 21 22:02:14.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7291' Mar 21 22:02:17.336: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 21 22:02:17.336: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1793 Mar 21 22:02:17.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-7291' Mar 21 22:02:17.474: INFO: stderr: "" Mar 21 22:02:17.475: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:02:17.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7291" for this suite. 
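The stderr above already points at the replacement for the deprecated --generator=job/v1 form; with the same image, the non-deprecated equivalent is:

kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
# cleanup, as in the run:
kubectl delete jobs e2e-test-httpd-job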
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":182,"skipped":2864,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:02:17.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-e124d7ee-6166-4ca4-95fe-2935dc31b3d3 STEP: Creating a pod to test consume configMaps Mar 21 22:02:17.570: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc92d89a-e8da-4b03-9642-70e36d4be1ab" in namespace "configmap-6178" to be "success or failure" Mar 21 22:02:17.572: INFO: Pod "pod-configmaps-fc92d89a-e8da-4b03-9642-70e36d4be1ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40835ms Mar 21 22:02:19.606: INFO: Pod "pod-configmaps-fc92d89a-e8da-4b03-9642-70e36d4be1ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036368677s Mar 21 22:02:21.611: INFO: Pod "pod-configmaps-fc92d89a-e8da-4b03-9642-70e36d4be1ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040640679s STEP: Saw pod success Mar 21 22:02:21.611: INFO: Pod "pod-configmaps-fc92d89a-e8da-4b03-9642-70e36d4be1ab" satisfied condition "success or failure" Mar 21 22:02:21.613: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-fc92d89a-e8da-4b03-9642-70e36d4be1ab container configmap-volume-test: STEP: delete the pod Mar 21 22:02:21.635: INFO: Waiting for pod pod-configmaps-fc92d89a-e8da-4b03-9642-70e36d4be1ab to disappear Mar 21 22:02:21.639: INFO: Pod pod-configmaps-fc92d89a-e8da-4b03-9642-70e36d4be1ab no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:02:21.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6178" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2871,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:02:21.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 STEP: creating the pod Mar 21 22:02:21.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9324' Mar 21 22:02:22.028: INFO: stderr: "" Mar 21 22:02:22.028: INFO: stdout: "pod/pause created\n" Mar 21 22:02:22.028: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 21 22:02:22.028: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9324" to be "running and ready" Mar 21 22:02:22.043: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.925675ms Mar 21 22:02:24.049: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021010288s Mar 21 22:02:26.053: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.025282588s Mar 21 22:02:26.053: INFO: Pod "pause" satisfied condition "running and ready" Mar 21 22:02:26.053: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 21 22:02:26.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9324' Mar 21 22:02:26.148: INFO: stderr: "" Mar 21 22:02:26.148: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 21 22:02:26.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9324' Mar 21 22:02:26.247: INFO: stderr: "" Mar 21 22:02:26.247: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 21 22:02:26.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9324' Mar 21 22:02:26.360: INFO: stderr: "" Mar 21 22:02:26.360: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 21 22:02:26.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9324' Mar 21 22:02:26.486: INFO: stderr: "" Mar 21 22:02:26.486: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 STEP: using delete to clean up resources Mar 21 22:02:26.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9324' Mar 21 22:02:26.632: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 21 22:02:26.632: INFO: stdout: "pod \"pause\" force deleted\n" Mar 21 22:02:26.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9324' Mar 21 22:02:26.744: INFO: stderr: "No resources found in kubectl-9324 namespace.\n" Mar 21 22:02:26.745: INFO: stdout: "" Mar 21 22:02:26.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9324 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 21 22:02:26.835: INFO: stderr: "" Mar 21 22:02:26.835: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:02:26.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9324" for this suite. 
• [SLOW TEST:5.313 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1379 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":184,"skipped":2893,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:02:26.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8638, will wait for the garbage collector to delete the pods Mar 21 22:02:33.112: INFO: Deleting Job.batch foo took: 6.873238ms Mar 21 22:02:33.412: INFO: Terminating Job.batch foo pods took: 300.268083ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:03:09.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8638" for this suite. 
• [SLOW TEST:42.365 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":185,"skipped":2901,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:03:09.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-7ab21e7c-e5fb-45a4-baf2-0b4ccf7998b7 STEP: Creating a pod to test consume secrets Mar 21 22:03:09.412: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-705ca244-74c1-4774-83d3-9c563cfc1e26" in namespace "projected-2998" to be "success or failure" Mar 21 22:03:09.419: INFO: Pod "pod-projected-secrets-705ca244-74c1-4774-83d3-9c563cfc1e26": Phase="Pending", Reason="", readiness=false. Elapsed: 7.52123ms Mar 21 22:03:11.423: INFO: Pod "pod-projected-secrets-705ca244-74c1-4774-83d3-9c563cfc1e26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011201336s Mar 21 22:03:13.427: INFO: Pod "pod-projected-secrets-705ca244-74c1-4774-83d3-9c563cfc1e26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015258216s STEP: Saw pod success Mar 21 22:03:13.427: INFO: Pod "pod-projected-secrets-705ca244-74c1-4774-83d3-9c563cfc1e26" satisfied condition "success or failure" Mar 21 22:03:13.430: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-705ca244-74c1-4774-83d3-9c563cfc1e26 container projected-secret-volume-test: STEP: delete the pod Mar 21 22:03:13.451: INFO: Waiting for pod pod-projected-secrets-705ca244-74c1-4774-83d3-9c563cfc1e26 to disappear Mar 21 22:03:13.455: INFO: Pod pod-projected-secrets-705ca244-74c1-4774-83d3-9c563cfc1e26 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:03:13.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2998" for this suite. 
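"Mappings and Item Mode set" corresponds to per-item path and mode on the projected secret source. A sketch (key, path, and the 0400 mode are illustrative assumptions):

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1   # the mapping
            mode: 0400              # the item mode
EOF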
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":2904,"failed":0} SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:03:13.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-9565/configmap-test-f805ef03-e277-43f0-a284-f25e26ae3951 STEP: Creating a pod to test consume configMaps Mar 21 22:03:13.628: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d2cf3cb-db5d-4b89-bf7a-046934c61e4a" in namespace "configmap-9565" to be "success or failure" Mar 21 22:03:13.634: INFO: Pod "pod-configmaps-1d2cf3cb-db5d-4b89-bf7a-046934c61e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.630183ms Mar 21 22:03:15.639: INFO: Pod "pod-configmaps-1d2cf3cb-db5d-4b89-bf7a-046934c61e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010582697s Mar 21 22:03:17.643: INFO: Pod "pod-configmaps-1d2cf3cb-db5d-4b89-bf7a-046934c61e4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014736824s STEP: Saw pod success Mar 21 22:03:17.643: INFO: Pod "pod-configmaps-1d2cf3cb-db5d-4b89-bf7a-046934c61e4a" satisfied condition "success or failure" Mar 21 22:03:17.646: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1d2cf3cb-db5d-4b89-bf7a-046934c61e4a container env-test: STEP: delete the pod Mar 21 22:03:17.665: INFO: Waiting for pod pod-configmaps-1d2cf3cb-db5d-4b89-bf7a-046934c61e4a to disappear Mar 21 22:03:17.709: INFO: Pod pod-configmaps-1d2cf3cb-db5d-4b89-bf7a-046934c61e4a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:03:17.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9565" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":2911,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:03:17.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1464 STEP: creating an pod Mar 21 22:03:17.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-3527 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 21 22:03:17.853: INFO: stderr: "" Mar 21 22:03:17.853: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Mar 21 22:03:17.853: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 21 22:03:17.854: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3527" to be "running and ready, or succeeded" Mar 21 22:03:17.856: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.665333ms Mar 21 22:03:19.860: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006736946s Mar 21 22:03:21.864: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.010912735s Mar 21 22:03:21.865: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 21 22:03:21.865: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Mar 21 22:03:21.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3527' Mar 21 22:03:21.967: INFO: stderr: "" Mar 21 22:03:21.967: INFO: stdout: "I0321 22:03:20.144739 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/c85 519\nI0321 22:03:20.344957 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/f4cj 300\nI0321 22:03:20.544921 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/8rgc 502\nI0321 22:03:20.744998 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/g2b 548\nI0321 22:03:20.944933 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/f7s 433\nI0321 22:03:21.144992 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/58n 336\nI0321 22:03:21.344980 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/lhqr 294\nI0321 22:03:21.544935 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/fc7c 304\nI0321 22:03:21.744961 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/2sp 270\nI0321 22:03:21.944941 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/7jfq 291\n" STEP: limiting log lines Mar 21 22:03:21.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3527 --tail=1' Mar 21 22:03:22.071: INFO: stderr: "" Mar 21 22:03:22.071: INFO: stdout: "I0321 22:03:21.944941 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/7jfq 291\n" Mar 21 22:03:22.072: INFO: got output "I0321 22:03:21.944941 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/7jfq 291\n" STEP: limiting log bytes Mar 21 22:03:22.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3527 --limit-bytes=1' Mar 21 22:03:22.171: INFO: stderr: "" Mar 21 22:03:22.171: INFO: stdout: "I" Mar 21 22:03:22.171: INFO: got output "I" STEP: exposing timestamps Mar 21 22:03:22.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3527 --tail=1 --timestamps' Mar 21 22:03:22.270: INFO: stderr: "" Mar 21 22:03:22.270: INFO: stdout: "2020-03-21T22:03:22.145054347Z I0321 22:03:22.144891 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/rvss 417\n" Mar 21 22:03:22.270: INFO: got output "2020-03-21T22:03:22.145054347Z I0321 22:03:22.144891 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/rvss 417\n" STEP: restricting to a time range Mar 21 22:03:24.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3527 --since=1s' Mar 21 22:03:24.868: INFO: stderr: "" Mar 21 22:03:24.868: INFO: stdout: "I0321 22:03:23.944968 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/w8l 525\nI0321 22:03:24.144936 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/xjfq 268\nI0321 22:03:24.345027 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/8jb5 532\nI0321 22:03:24.544939 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/w2b 467\nI0321 22:03:24.744949 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/qtx 376\n" Mar 21 22:03:24.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3527 --since=24h' Mar 21 22:03:24.965: INFO: stderr: "" Mar 21 22:03:24.965: INFO: stdout: 
"I0321 22:03:20.144739 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/c85 519\nI0321 22:03:20.344957 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/f4cj 300\nI0321 22:03:20.544921 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/8rgc 502\nI0321 22:03:20.744998 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/g2b 548\nI0321 22:03:20.944933 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/f7s 433\nI0321 22:03:21.144992 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/58n 336\nI0321 22:03:21.344980 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/lhqr 294\nI0321 22:03:21.544935 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/fc7c 304\nI0321 22:03:21.744961 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/2sp 270\nI0321 22:03:21.944941 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/7jfq 291\nI0321 22:03:22.144891 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/rvss 417\nI0321 22:03:22.344975 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/4bql 352\nI0321 22:03:22.544931 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/qrbs 393\nI0321 22:03:22.744912 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/76dx 499\nI0321 22:03:22.944949 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/n6cr 479\nI0321 22:03:23.144951 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/bwnd 283\nI0321 22:03:23.345008 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/hgcr 561\nI0321 22:03:23.544979 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/w7jp 314\nI0321 22:03:23.744997 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/nl9 326\nI0321 22:03:23.944968 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/w8l 525\nI0321 22:03:24.144936 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/xjfq 268\nI0321 22:03:24.345027 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/8jb5 532\nI0321 22:03:24.544939 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/w2b 467\nI0321 22:03:24.744949 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/qtx 376\nI0321 22:03:24.944889 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/4r4g 293\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470 Mar 21 22:03:24.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3527' Mar 21 22:03:27.588: INFO: stderr: "" Mar 21 22:03:27.588: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:03:27.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3527" for this suite. 
• [SLOW TEST:9.880 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":188,"skipped":2942,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:03:27.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:03:27.655: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:03:28.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8769" for this suite. 
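A CRD opts into the /status endpoint this test gets/updates/patches by declaring subresources.status on the served version. A minimal sketch (group, kind, and schema here are illustrative assumptions, not the definition the test created):

kubectl create -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    # status: {} exposes GET/PUT/PATCH on .../noxus/<name>/status;
    # writes through that endpoint only honor changes to .status.
    subresources:
      status: {}
EOF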
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":189,"skipped":2951,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:03:28.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-6a5b78fe-46b4-42d8-8faa-5e7fc7fd73fa STEP: Creating secret with name s-test-opt-upd-b3321259-7fe8-44f9-a9b9-709b73392201 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6a5b78fe-46b4-42d8-8faa-5e7fc7fd73fa STEP: Updating secret s-test-opt-upd-b3321259-7fe8-44f9-a9b9-709b73392201 STEP: Creating secret with name s-test-opt-create-b7febdad-f273-46e2-a027-ae779ca0af23 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:03:38.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1602" for this suite. 
• [SLOW TEST:10.346 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":2965,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:03:38.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9083 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 21 22:03:38.744: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 21 22:04:00.868: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.22:8080/dial?request=hostname&protocol=udp&host=10.244.1.215&port=8081&tries=1'] Namespace:pod-network-test-9083 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:04:00.868: INFO: >>> kubeConfig: /root/.kube/config I0321 22:04:00.900370 6 log.go:172] (0xc000f86bb0) (0xc002350640) Create stream I0321 22:04:00.900410 6 log.go:172] (0xc000f86bb0) (0xc002350640) Stream added, broadcasting: 1 I0321 22:04:00.902934 6 log.go:172] (0xc000f86bb0) Reply frame received for 1 I0321 22:04:00.902977 6 log.go:172] (0xc000f86bb0) (0xc0003f0640) Create stream I0321 22:04:00.902996 6 log.go:172] (0xc000f86bb0) (0xc0003f0640) Stream added, broadcasting: 3 I0321 22:04:00.903953 6 log.go:172] (0xc000f86bb0) Reply frame received for 3 I0321 22:04:00.903991 6 log.go:172] (0xc000f86bb0) (0xc002350780) Create stream I0321 22:04:00.904006 6 log.go:172] (0xc000f86bb0) (0xc002350780) Stream added, broadcasting: 5 I0321 22:04:00.905020 6 log.go:172] (0xc000f86bb0) Reply frame received for 5 I0321 22:04:00.999799 6 log.go:172] (0xc000f86bb0) Data frame received for 3 I0321 22:04:00.999837 6 log.go:172] (0xc0003f0640) (3) Data frame handling I0321 22:04:00.999865 6 log.go:172] (0xc0003f0640) (3) Data frame sent I0321 22:04:01.000728 6 log.go:172] (0xc000f86bb0) Data frame received for 3 I0321 22:04:01.000760 6 log.go:172] (0xc0003f0640) (3) Data frame handling I0321 22:04:01.001317 6 log.go:172] (0xc000f86bb0) Data frame received for 5 I0321 22:04:01.001346 6 log.go:172] (0xc002350780) (5) Data frame handling I0321 22:04:01.003201 6 log.go:172] (0xc000f86bb0) Data frame received for 1 I0321 22:04:01.003236 6 log.go:172] (0xc002350640) (1) Data frame handling I0321 22:04:01.003267 6 log.go:172] (0xc002350640) (1) 
Data frame sent I0321 22:04:01.003289 6 log.go:172] (0xc000f86bb0) (0xc002350640) Stream removed, broadcasting: 1 I0321 22:04:01.003348 6 log.go:172] (0xc000f86bb0) Go away received I0321 22:04:01.003400 6 log.go:172] (0xc000f86bb0) (0xc002350640) Stream removed, broadcasting: 1 I0321 22:04:01.003419 6 log.go:172] (0xc000f86bb0) (0xc0003f0640) Stream removed, broadcasting: 3 I0321 22:04:01.003430 6 log.go:172] (0xc000f86bb0) (0xc002350780) Stream removed, broadcasting: 5 Mar 21 22:04:01.003: INFO: Waiting for responses: map[] Mar 21 22:04:01.006: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.22:8080/dial?request=hostname&protocol=udp&host=10.244.2.21&port=8081&tries=1'] Namespace:pod-network-test-9083 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:04:01.006: INFO: >>> kubeConfig: /root/.kube/config I0321 22:04:01.040970 6 log.go:172] (0xc000f871e0) (0xc002350b40) Create stream I0321 22:04:01.040994 6 log.go:172] (0xc000f871e0) (0xc002350b40) Stream added, broadcasting: 1 I0321 22:04:01.043452 6 log.go:172] (0xc000f871e0) Reply frame received for 1 I0321 22:04:01.043497 6 log.go:172] (0xc000f871e0) (0xc0024f4640) Create stream I0321 22:04:01.043514 6 log.go:172] (0xc000f871e0) (0xc0024f4640) Stream added, broadcasting: 3 I0321 22:04:01.044847 6 log.go:172] (0xc000f871e0) Reply frame received for 3 I0321 22:04:01.044888 6 log.go:172] (0xc000f871e0) (0xc001e67e00) Create stream I0321 22:04:01.044902 6 log.go:172] (0xc000f871e0) (0xc001e67e00) Stream added, broadcasting: 5 I0321 22:04:01.046302 6 log.go:172] (0xc000f871e0) Reply frame received for 5 I0321 22:04:01.103864 6 log.go:172] (0xc000f871e0) Data frame received for 3 I0321 22:04:01.103901 6 log.go:172] (0xc0024f4640) (3) Data frame handling I0321 22:04:01.103927 6 log.go:172] (0xc0024f4640) (3) Data frame sent I0321 22:04:01.104430 6 log.go:172] (0xc000f871e0) Data frame received for 3 I0321 22:04:01.104465 6 log.go:172] (0xc0024f4640) (3) Data frame handling I0321 22:04:01.104536 6 log.go:172] (0xc000f871e0) Data frame received for 5 I0321 22:04:01.104574 6 log.go:172] (0xc001e67e00) (5) Data frame handling I0321 22:04:01.106345 6 log.go:172] (0xc000f871e0) Data frame received for 1 I0321 22:04:01.106380 6 log.go:172] (0xc002350b40) (1) Data frame handling I0321 22:04:01.106397 6 log.go:172] (0xc002350b40) (1) Data frame sent I0321 22:04:01.106415 6 log.go:172] (0xc000f871e0) (0xc002350b40) Stream removed, broadcasting: 1 I0321 22:04:01.106441 6 log.go:172] (0xc000f871e0) Go away received I0321 22:04:01.106542 6 log.go:172] (0xc000f871e0) (0xc002350b40) Stream removed, broadcasting: 1 I0321 22:04:01.106564 6 log.go:172] (0xc000f871e0) (0xc0024f4640) Stream removed, broadcasting: 3 I0321 22:04:01.106592 6 log.go:172] (0xc000f871e0) (0xc001e67e00) Stream removed, broadcasting: 5 Mar 21 22:04:01.106: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:04:01.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9083" for this suite. 
• [SLOW TEST:22.508 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":2975,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:04:01.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:04:01.173: INFO: Creating deployment "test-recreate-deployment" Mar 21 22:04:01.206: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 21 22:04:01.256: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 21 22:04:03.262: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 21 22:04:03.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425041, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425041, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425041, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425041, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 22:04:05.269: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 21 22:04:05.276: INFO: Updating deployment test-recreate-deployment Mar 21 22:04:05.276: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 21 22:04:05.750: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7195 
/apis/apps/v1/namespaces/deployment-7195/deployments/test-recreate-deployment 8711de1c-9e5b-4834-adbe-b0cc581f571e 1659521 2 2020-03-21 22:04:01 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0042f8838 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-21 22:04:05 +0000 UTC,LastTransitionTime:2020-03-21 22:04:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-21 22:04:05 +0000 UTC,LastTransitionTime:2020-03-21 22:04:01 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 21 22:04:05.754: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-7195 /apis/apps/v1/namespaces/deployment-7195/replicasets/test-recreate-deployment-5f94c574ff dec7aef7-6bf7-49e3-bc91-7abe36402095 1659519 1 2020-03-21 22:04:05 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 8711de1c-9e5b-4834-adbe-b0cc581f571e 0xc0042f8bd7 0xc0042f8bd8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0042f8c38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 21 22:04:05.754: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 21 22:04:05.754: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-7195 /apis/apps/v1/namespaces/deployment-7195/replicasets/test-recreate-deployment-799c574856 95f11279-2006-4e95-9491-2bbe8a045d13 1659510 2 2020-03-21 22:04:01 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 8711de1c-9e5b-4834-adbe-b0cc581f571e 0xc0042f8ca7 0xc0042f8ca8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0042f8d18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 21 22:04:05.767: INFO: Pod "test-recreate-deployment-5f94c574ff-rnsz9" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-rnsz9 test-recreate-deployment-5f94c574ff- deployment-7195 /api/v1/namespaces/deployment-7195/pods/test-recreate-deployment-5f94c574ff-rnsz9 7e9b3a42-28f3-4b7e-9ff2-443810c937bd 1659523 0 2020-03-21 22:04:05 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff dec7aef7-6bf7-49e3-bc91-7abe36402095 0xc0054671d7 0xc0054671d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wddrq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wddrq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wddrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-21 22:04:05 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:04:05.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7195" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":192,"skipped":2995,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:04:05.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 22:04:05.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38fa247d-2055-4330-bd1e-df4316659db1" in namespace "projected-5062" to be "success or failure" Mar 21 22:04:05.881: INFO: Pod "downwardapi-volume-38fa247d-2055-4330-bd1e-df4316659db1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.251964ms Mar 21 22:04:07.967: INFO: Pod "downwardapi-volume-38fa247d-2055-4330-bd1e-df4316659db1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089784701s Mar 21 22:04:09.985: INFO: Pod "downwardapi-volume-38fa247d-2055-4330-bd1e-df4316659db1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107882365s Mar 21 22:04:12.213: INFO: Pod "downwardapi-volume-38fa247d-2055-4330-bd1e-df4316659db1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.3357392s STEP: Saw pod success Mar 21 22:04:12.213: INFO: Pod "downwardapi-volume-38fa247d-2055-4330-bd1e-df4316659db1" satisfied condition "success or failure" Mar 21 22:04:12.216: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-38fa247d-2055-4330-bd1e-df4316659db1 container client-container: STEP: delete the pod Mar 21 22:04:12.280: INFO: Waiting for pod downwardapi-volume-38fa247d-2055-4330-bd1e-df4316659db1 to disappear Mar 21 22:04:12.301: INFO: Pod downwardapi-volume-38fa247d-2055-4330-bd1e-df4316659db1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:04:12.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5062" for this suite. • [SLOW TEST:6.534 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":2996,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:04:12.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:04:12.515: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 21 22:04:17.523: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 21 22:04:17.523: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 21 22:04:17.674: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3838 /apis/apps/v1/namespaces/deployment-3838/deployments/test-cleanup-deployment 3891c14a-aa59-4197-bf57-9bd7b9dea70b 1659648 1 2020-03-21 22:04:17 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0051da678 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 21 22:04:17.764: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-3838 /apis/apps/v1/namespaces/deployment-3838/replicasets/test-cleanup-deployment-55ffc6b7b6 4676c039-71e4-40cf-8f5d-0219424131af 1659657 1 2020-03-21 22:04:17 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 3891c14a-aa59-4197-bf57-9bd7b9dea70b 0xc0051daa97 0xc0051daa98}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0051dab08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 21 22:04:17.764: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 21 22:04:17.764: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3838 /apis/apps/v1/namespaces/deployment-3838/replicasets/test-cleanup-controller ce7c3154-865f-4876-b980-04ca0eb4caf9 1659650 1 2020-03-21 22:04:12 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 3891c14a-aa59-4197-bf57-9bd7b9dea70b 0xc0051da9af 0xc0051da9c0}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log
File IfNotPresent nil false false false}] [] Always 0xc0051daa28 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 21 22:04:17.840: INFO: Pod "test-cleanup-controller-glxtv" is available: &Pod{ObjectMeta:{test-cleanup-controller-glxtv test-cleanup-controller- deployment-3838 /api/v1/namespaces/deployment-3838/pods/test-cleanup-controller-glxtv 2b31958a-7188-4a96-87b5-aea6d6e16d41 1659635 0 2020-03-21 22:04:12 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller ce7c3154-865f-4876-b980-04ca0eb4caf9 0xc0042a06a7 0xc0042a06a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mdsl4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mdsl4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mdsl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.217,StartTime:2020-03-21 22:04:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-21 22:04:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f92cb46b33c9e519722190f4cccb53f06aa635189d6522cc857539a54767b957,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.217,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:04:17.841: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-rbxpq" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-rbxpq test-cleanup-deployment-55ffc6b7b6- deployment-3838 /api/v1/namespaces/deployment-3838/pods/test-cleanup-deployment-55ffc6b7b6-rbxpq fc8f4f60-9a31-428e-b6c2-c606c5e3fb3b 1659655 0 2020-03-21 22:04:17 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 4676c039-71e4-40cf-8f5d-0219424131af 0xc0042a0847 0xc0042a0848}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mdsl4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mdsl4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mdsl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:04:17.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3838" for this suite. 
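The dumps above show RevisionHistoryLimit:*0 on "test-cleanup-deployment", which is what drives the history cleanup being verified. A minimal sketch of the same behaviour; the deployment name, labels, and image tags are illustrative:

# With revisionHistoryLimit: 0, old ReplicaSets are deleted as soon as a rollout completes.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# Trigger a rollout, then confirm only the new ReplicaSet remains.
kubectl set image deployment/cleanup-demo httpd=docker.io/library/httpd:2.4.39-alpine
kubectl rollout status deployment/cleanup-demo
kubectl get replicasets -l app=cleanup-demo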
• [SLOW TEST:5.582 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":194,"skipped":3035,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:04:17.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:04:17.985: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 21 22:04:18.048: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 21 22:04:23.052: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 21 22:04:23.052: INFO: Creating deployment "test-rolling-update-deployment" Mar 21 22:04:23.135: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 21 22:04:23.185: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 21 22:04:25.204: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 21 22:04:25.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425063, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425063, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425063, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425063, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 22:04:27.212: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 21 22:04:27.222: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8242 
/apis/apps/v1/namespaces/deployment-8242/deployments/test-rolling-update-deployment e7b37cfe-51bf-4754-bbc5-7721a053541f 1659763 1 2020-03-21 22:04:23 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f06bc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-21 22:04:23 +0000 UTC,LastTransitionTime:2020-03-21 22:04:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-21 22:04:25 +0000 UTC,LastTransitionTime:2020-03-21 22:04:23 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 21 22:04:27.225: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-8242 /apis/apps/v1/namespaces/deployment-8242/replicasets/test-rolling-update-deployment-67cf4f6444 43fba396-75cd-4bde-8c51-18f3e42f2ecf 1659752 1 2020-03-21 22:04:23 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment e7b37cfe-51bf-4754-bbc5-7721a053541f 0xc002f07367 0xc002f07368}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f073d8 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 21 22:04:27.225: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 21 22:04:27.226: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8242 /apis/apps/v1/namespaces/deployment-8242/replicasets/test-rolling-update-controller be268d3f-cb86-491c-b04d-6bfbc8e5ab78 1659761 2 2020-03-21 22:04:17 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment e7b37cfe-51bf-4754-bbc5-7721a053541f 0xc002f07297 0xc002f07298}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002f072f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 21 22:04:27.229: INFO: Pod "test-rolling-update-deployment-67cf4f6444-8rfv2" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-8rfv2 test-rolling-update-deployment-67cf4f6444- deployment-8242 /api/v1/namespaces/deployment-8242/pods/test-rolling-update-deployment-67cf4f6444-8rfv2 a19044e6-27e8-4fde-aae2-944a3ec26ff8 1659751 0 2020-03-21 22:04:23 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 43fba396-75cd-4bde-8c51-18f3e42f2ecf 0xc002f07837 0xc002f07838}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75qlf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75qlf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75qlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:04:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.218,StartTime:2020-03-21 22:04:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-21 22:04:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://dc423b4f585a6025acbb9a4bce597527753ffc9e89776c86c74b10fb12448842,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.218,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:04:27.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8242" for this suite. • [SLOW TEST:9.345 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":195,"skipped":3043,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:04:27.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:04:33.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9495" for this suite. STEP: Destroying namespace "nsdeletetest-1828" for this suite. Mar 21 22:04:33.524: INFO: Namespace nsdeletetest-1828 was already deleted STEP: Destroying namespace "nsdeletetest-9495" for this suite. 
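The namespace lifecycle steps above map directly onto kubectl. A minimal sketch with illustrative names:

# Services do not survive deletion and recreation of their namespace.
kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo create service clusterip demo-svc --tcp=80:80
kubectl delete namespace nsdelete-demo --wait=true
kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo get services   # expect "No resources found"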
• [SLOW TEST:6.291 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":196,"skipped":3046,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:04:33.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 22:04:34.255: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 22:04:36.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425074, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425074, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425074, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425074, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 22:04:39.881: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 21 22:04:39.943: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:04:39.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4556" for this suite. STEP: Destroying namespace "webhook-4556-markers" for this suite. 
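The registration step above installs a webhook scoped to CustomResourceDefinition creation. A minimal sketch of such a ValidatingWebhookConfiguration; the webhook name, service reference, and caBundle are placeholders for a webhook server you would have to operate yourself:

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-demo
webhooks:
- name: deny-crd.example.com             # placeholder
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: webhook-demo            # placeholder
      name: webhook-svc                  # placeholder
      path: /crd
    caBundle: "<base64-encoded-CA>"      # placeholder
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
EOF
# Any subsequent CRD creation is sent to the webhook, which can deny it in its AdmissionReview response.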
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.520 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":197,"skipped":3046,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:04:40.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-560 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 21 22:04:40.098: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 21 22:05:02.178: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.220 8081 | grep -v '^\s*$'] Namespace:pod-network-test-560 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:05:02.178: INFO: >>> kubeConfig: /root/.kube/config I0321 22:05:02.215203 6 log.go:172] (0xc000db4370) (0xc001ac8fa0) Create stream I0321 22:05:02.215235 6 log.go:172] (0xc000db4370) (0xc001ac8fa0) Stream added, broadcasting: 1 I0321 22:05:02.217090 6 log.go:172] (0xc000db4370) Reply frame received for 1 I0321 22:05:02.217279 6 log.go:172] (0xc000db4370) (0xc001fe3400) Create stream I0321 22:05:02.217300 6 log.go:172] (0xc000db4370) (0xc001fe3400) Stream added, broadcasting: 3 I0321 22:05:02.218233 6 log.go:172] (0xc000db4370) Reply frame received for 3 I0321 22:05:02.218271 6 log.go:172] (0xc000db4370) (0xc001fe3540) Create stream I0321 22:05:02.218278 6 log.go:172] (0xc000db4370) (0xc001fe3540) Stream added, broadcasting: 5 I0321 22:05:02.218926 6 log.go:172] (0xc000db4370) Reply frame received for 5 I0321 22:05:03.320728 6 log.go:172] (0xc000db4370) Data frame received for 3 I0321 22:05:03.320767 6 log.go:172] (0xc001fe3400) (3) Data frame handling I0321 22:05:03.320790 6 log.go:172] (0xc001fe3400) (3) Data frame sent I0321 22:05:03.321087 6 log.go:172] (0xc000db4370) Data frame received for 5 I0321 22:05:03.321268 6 log.go:172] (0xc001fe3540) (5) Data frame handling I0321 22:05:03.321304 6 log.go:172] (0xc000db4370) Data frame received for 3 I0321 22:05:03.321324 6 log.go:172] (0xc001fe3400) (3) Data frame handling I0321 22:05:03.323484 6 log.go:172] (0xc000db4370) Data frame received for 1 I0321 22:05:03.323505 6 
log.go:172] (0xc001ac8fa0) (1) Data frame handling I0321 22:05:03.323518 6 log.go:172] (0xc001ac8fa0) (1) Data frame sent I0321 22:05:03.323538 6 log.go:172] (0xc000db4370) (0xc001ac8fa0) Stream removed, broadcasting: 1 I0321 22:05:03.323555 6 log.go:172] (0xc000db4370) Go away received I0321 22:05:03.323662 6 log.go:172] (0xc000db4370) (0xc001ac8fa0) Stream removed, broadcasting: 1 I0321 22:05:03.323688 6 log.go:172] (0xc000db4370) (0xc001fe3400) Stream removed, broadcasting: 3 I0321 22:05:03.323707 6 log.go:172] (0xc000db4370) (0xc001fe3540) Stream removed, broadcasting: 5 Mar 21 22:05:03.323: INFO: Found all expected endpoints: [netserver-0] Mar 21 22:05:03.339: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.27 8081 | grep -v '^\s*$'] Namespace:pod-network-test-560 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:05:03.339: INFO: >>> kubeConfig: /root/.kube/config I0321 22:05:03.377401 6 log.go:172] (0xc002a146e0) (0xc001fdee60) Create stream I0321 22:05:03.377431 6 log.go:172] (0xc002a146e0) (0xc001fdee60) Stream added, broadcasting: 1 I0321 22:05:03.379616 6 log.go:172] (0xc002a146e0) Reply frame received for 1 I0321 22:05:03.379650 6 log.go:172] (0xc002a146e0) (0xc001458140) Create stream I0321 22:05:03.379664 6 log.go:172] (0xc002a146e0) (0xc001458140) Stream added, broadcasting: 3 I0321 22:05:03.380708 6 log.go:172] (0xc002a146e0) Reply frame received for 3 I0321 22:05:03.380741 6 log.go:172] (0xc002a146e0) (0xc001fe37c0) Create stream I0321 22:05:03.380754 6 log.go:172] (0xc002a146e0) (0xc001fe37c0) Stream added, broadcasting: 5 I0321 22:05:03.381829 6 log.go:172] (0xc002a146e0) Reply frame received for 5 I0321 22:05:04.447729 6 log.go:172] (0xc002a146e0) Data frame received for 3 I0321 22:05:04.447785 6 log.go:172] (0xc001458140) (3) Data frame handling I0321 22:05:04.447820 6 log.go:172] (0xc001458140) (3) Data frame sent I0321 22:05:04.447898 6 log.go:172] (0xc002a146e0) Data frame received for 5 I0321 22:05:04.447939 6 log.go:172] (0xc001fe37c0) (5) Data frame handling I0321 22:05:04.448057 6 log.go:172] (0xc002a146e0) Data frame received for 3 I0321 22:05:04.448097 6 log.go:172] (0xc001458140) (3) Data frame handling I0321 22:05:04.450418 6 log.go:172] (0xc002a146e0) Data frame received for 1 I0321 22:05:04.450451 6 log.go:172] (0xc001fdee60) (1) Data frame handling I0321 22:05:04.450487 6 log.go:172] (0xc001fdee60) (1) Data frame sent I0321 22:05:04.450512 6 log.go:172] (0xc002a146e0) (0xc001fdee60) Stream removed, broadcasting: 1 I0321 22:05:04.450634 6 log.go:172] (0xc002a146e0) (0xc001fdee60) Stream removed, broadcasting: 1 I0321 22:05:04.450672 6 log.go:172] (0xc002a146e0) Go away received I0321 22:05:04.450724 6 log.go:172] (0xc002a146e0) (0xc001458140) Stream removed, broadcasting: 3 I0321 22:05:04.450756 6 log.go:172] (0xc002a146e0) (0xc001fe37c0) Stream removed, broadcasting: 5 Mar 21 22:05:04.450: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:05:04.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-560" for this suite. 
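As with the earlier /dial probe, the nc check above can be run by hand; the namespace, pod name, and netserver IP (10.244.1.220) come from this run:

# Send "hostName" to the netserver's UDP port and expect its hostname back;
# grep -v strips blank lines, so an empty reply fails the check.
kubectl exec -n pod-network-test-560 host-test-container-pod -c agnhost -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.220 8081 | grep -v '^\s*$'"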
• [SLOW TEST:24.412 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3052,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:05:04.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 21 22:05:04.532: INFO: Waiting up to 5m0s for pod "pod-b6114b16-b3af-4398-90b1-c86399a173cd" in namespace "emptydir-8331" to be "success or failure" Mar 21 22:05:04.536: INFO: Pod "pod-b6114b16-b3af-4398-90b1-c86399a173cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213663ms Mar 21 22:05:06.540: INFO: Pod "pod-b6114b16-b3af-4398-90b1-c86399a173cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008213113s Mar 21 22:05:08.545: INFO: Pod "pod-b6114b16-b3af-4398-90b1-c86399a173cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013013033s STEP: Saw pod success Mar 21 22:05:08.545: INFO: Pod "pod-b6114b16-b3af-4398-90b1-c86399a173cd" satisfied condition "success or failure" Mar 21 22:05:08.548: INFO: Trying to get logs from node jerma-worker pod pod-b6114b16-b3af-4398-90b1-c86399a173cd container test-container: STEP: delete the pod Mar 21 22:05:08.568: INFO: Waiting for pod pod-b6114b16-b3af-4398-90b1-c86399a173cd to disappear Mar 21 22:05:08.573: INFO: Pod pod-b6114b16-b3af-4398-90b1-c86399a173cd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:05:08.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8331" for this suite. 
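The pod this test creates mounts a tmpfs-backed emptyDir and checks a 0777 file mode. A rough equivalent, with illustrative names and a busybox image standing in for the test container:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.31
    # Create a file with mode 0777, then print its mode and the mount's filesystem type.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f && grep ' /test-volume ' /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs
EOF
kubectl logs emptydir-demo   # once the pod completes: expect "777" and a tmpfs mount entry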
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3058,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:05:08.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-3aae3d9c-b448-428c-b873-ef471da2e911 STEP: Creating a pod to test consume secrets Mar 21 22:05:08.675: INFO: Waiting up to 5m0s for pod "pod-secrets-31fe7e81-e94f-4d68-a39a-a3c8adf8efee" in namespace "secrets-1075" to be "success or failure" Mar 21 22:05:08.698: INFO: Pod "pod-secrets-31fe7e81-e94f-4d68-a39a-a3c8adf8efee": Phase="Pending", Reason="", readiness=false. Elapsed: 22.485431ms Mar 21 22:05:10.764: INFO: Pod "pod-secrets-31fe7e81-e94f-4d68-a39a-a3c8adf8efee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089249684s Mar 21 22:05:12.769: INFO: Pod "pod-secrets-31fe7e81-e94f-4d68-a39a-a3c8adf8efee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093515118s STEP: Saw pod success Mar 21 22:05:12.769: INFO: Pod "pod-secrets-31fe7e81-e94f-4d68-a39a-a3c8adf8efee" satisfied condition "success or failure" Mar 21 22:05:12.772: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-31fe7e81-e94f-4d68-a39a-a3c8adf8efee container secret-volume-test: STEP: delete the pod Mar 21 22:05:12.804: INFO: Waiting for pod pod-secrets-31fe7e81-e94f-4d68-a39a-a3c8adf8efee to disappear Mar 21 22:05:12.825: INFO: Pod pod-secrets-31fe7e81-e94f-4d68-a39a-a3c8adf8efee no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:05:12.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1075" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3069,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:05:12.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-e90849ee-ae3d-4f0e-93bc-63329b6fe640 STEP: Creating a pod to test consume configMaps Mar 21 22:05:12.916: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd28b2e5-3831-4c6e-902e-d1dc94e762ce" in namespace "configmap-9524" to be "success or failure" Mar 21 22:05:12.974: INFO: Pod "pod-configmaps-bd28b2e5-3831-4c6e-902e-d1dc94e762ce": Phase="Pending", Reason="", readiness=false. Elapsed: 57.228265ms Mar 21 22:05:14.978: INFO: Pod "pod-configmaps-bd28b2e5-3831-4c6e-902e-d1dc94e762ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061510844s Mar 21 22:05:16.982: INFO: Pod "pod-configmaps-bd28b2e5-3831-4c6e-902e-d1dc94e762ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065103201s STEP: Saw pod success Mar 21 22:05:16.982: INFO: Pod "pod-configmaps-bd28b2e5-3831-4c6e-902e-d1dc94e762ce" satisfied condition "success or failure" Mar 21 22:05:16.985: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-bd28b2e5-3831-4c6e-902e-d1dc94e762ce container configmap-volume-test: STEP: delete the pod Mar 21 22:05:17.113: INFO: Waiting for pod pod-configmaps-bd28b2e5-3831-4c6e-902e-d1dc94e762ce to disappear Mar 21 22:05:17.141: INFO: Pod pod-configmaps-bd28b2e5-3831-4c6e-902e-d1dc94e762ce no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:05:17.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9524" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3077,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:05:17.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:05:17.290: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:05:21.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5435" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:05:21.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 21 22:05:28.030: INFO: 0 pods remaining Mar 21 22:05:28.030: INFO: 0 pods has nil DeletionTimestamp Mar 21 22:05:28.030: INFO: Mar 21 22:05:28.824: INFO: 0 pods remaining Mar 21 22:05:28.824: INFO: 0 pods has nil DeletionTimestamp Mar 21 22:05:28.824: INFO: STEP: Gathering metrics W0321 22:05:29.465551 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 21 22:05:29.465: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:05:29.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9291" for this suite. • [SLOW TEST:8.127 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":203,"skipped":3147,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:05:29.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1632 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 21 22:05:29.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8347' Mar 21 22:05:30.013: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 21 22:05:30.013: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 21 22:05:30.059: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-78c8g] Mar 21 22:05:30.059: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-78c8g" in namespace "kubectl-8347" to be "running and ready" Mar 21 22:05:30.061: INFO: Pod "e2e-test-httpd-rc-78c8g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299467ms Mar 21 22:05:32.070: INFO: Pod "e2e-test-httpd-rc-78c8g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011042503s Mar 21 22:05:34.074: INFO: Pod "e2e-test-httpd-rc-78c8g": Phase="Running", Reason="", readiness=true. Elapsed: 4.015120431s Mar 21 22:05:34.074: INFO: Pod "e2e-test-httpd-rc-78c8g" satisfied condition "running and ready" Mar 21 22:05:34.074: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-78c8g] Mar 21 22:05:34.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-8347' Mar 21 22:05:34.184: INFO: stderr: "" Mar 21 22:05:34.184: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.228. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.228. Set the 'ServerName' directive globally to suppress this message\n[Sat Mar 21 22:05:32.228448 2020] [mpm_event:notice] [pid 1:tid 140542628637544] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Mar 21 22:05:32.228501 2020] [core:notice] [pid 1:tid 140542628637544] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1637 Mar 21 22:05:34.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8347' Mar 21 22:05:34.294: INFO: stderr: "" Mar 21 22:05:34.295: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:05:34.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8347" for this suite. 
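The stderr above flags --generator=run/v1 as deprecated. On clusters where kubectl run no longer creates ReplicationControllers, an explicit manifest reproduces what the generator expanded to; this sketch assumes the generator's run=<name> label convention and is not the test's own invocation:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-httpd-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-httpd-rc
  template:
    metadata:
      labels:
        run: e2e-test-httpd-rc
    spec:
      containers:
      - name: e2e-test-httpd-rc
        image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl logs rc/e2e-test-httpd-rc   # same log retrieval the test performs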
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":204,"skipped":3149,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:05:34.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 21 22:05:34.384: INFO: Waiting up to 5m0s for pod "downward-api-109828b8-3201-4909-a38e-00259d36901f" in namespace "downward-api-6700" to be "success or failure" Mar 21 22:05:34.397: INFO: Pod "downward-api-109828b8-3201-4909-a38e-00259d36901f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.810532ms Mar 21 22:05:36.401: INFO: Pod "downward-api-109828b8-3201-4909-a38e-00259d36901f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017055783s Mar 21 22:05:38.405: INFO: Pod "downward-api-109828b8-3201-4909-a38e-00259d36901f": Phase="Running", Reason="", readiness=true. Elapsed: 4.021474672s Mar 21 22:05:40.410: INFO: Pod "downward-api-109828b8-3201-4909-a38e-00259d36901f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026157444s STEP: Saw pod success Mar 21 22:05:40.410: INFO: Pod "downward-api-109828b8-3201-4909-a38e-00259d36901f" satisfied condition "success or failure" Mar 21 22:05:40.413: INFO: Trying to get logs from node jerma-worker2 pod downward-api-109828b8-3201-4909-a38e-00259d36901f container dapi-container: STEP: delete the pod Mar 21 22:05:40.433: INFO: Waiting for pod downward-api-109828b8-3201-4909-a38e-00259d36901f to disappear Mar 21 22:05:40.446: INFO: Pod downward-api-109828b8-3201-4909-a38e-00259d36901f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:05:40.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6700" for this suite. 
• [SLOW TEST:6.187 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:05:40.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 21 22:05:40.539: INFO: Waiting up to 5m0s for pod "pod-4f012d8f-7c24-47e9-a522-1a9214c6f24b" in namespace "emptydir-924" to be "success or failure" Mar 21 22:05:40.550: INFO: Pod "pod-4f012d8f-7c24-47e9-a522-1a9214c6f24b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.361905ms Mar 21 22:05:42.554: INFO: Pod "pod-4f012d8f-7c24-47e9-a522-1a9214c6f24b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01500736s Mar 21 22:05:44.558: INFO: Pod "pod-4f012d8f-7c24-47e9-a522-1a9214c6f24b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019054124s STEP: Saw pod success Mar 21 22:05:44.558: INFO: Pod "pod-4f012d8f-7c24-47e9-a522-1a9214c6f24b" satisfied condition "success or failure" Mar 21 22:05:44.561: INFO: Trying to get logs from node jerma-worker2 pod pod-4f012d8f-7c24-47e9-a522-1a9214c6f24b container test-container: STEP: delete the pod Mar 21 22:05:44.599: INFO: Waiting for pod pod-4f012d8f-7c24-47e9-a522-1a9214c6f24b to disappear Mar 21 22:05:44.610: INFO: Pod pod-4f012d8f-7c24-47e9-a522-1a9214c6f24b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:05:44.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-924" for this suite. 
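Relative to the (root,0777,tmpfs) case earlier, the variant above differs only in the emptyDir medium: an empty string (the default) places the volume on the node's backing filesystem, while "Memory" places it on tmpfs. The field is documented in the live API schema:

kubectl explain pod.spec.volumes.emptyDir.medium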
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3194,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:05:44.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 21 22:05:44.694: INFO: Created pod &Pod{ObjectMeta:{dns-5420 dns-5420 /api/v1/namespaces/dns-5420/pods/dns-5420 f9566826-7b0a-4cf4-91f8-4a8d63e40e88 1660508 0 2020-03-21 22:05:44 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7flqc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7flqc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7flqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,Read
inessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Mar 21 22:05:48.703: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5420 PodName:dns-5420 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:05:48.703: INFO: >>> kubeConfig: /root/.kube/config I0321 22:05:48.738094 6 log.go:172] (0xc002297550) (0xc0012eedc0) Create stream I0321 22:05:48.738120 6 log.go:172] (0xc002297550) (0xc0012eedc0) Stream added, broadcasting: 1 I0321 22:05:48.740463 6 log.go:172] (0xc002297550) Reply frame received for 1 I0321 22:05:48.740513 6 log.go:172] (0xc002297550) (0xc0024f4dc0) Create stream I0321 22:05:48.740530 6 log.go:172] (0xc002297550) (0xc0024f4dc0) Stream added, broadcasting: 3 I0321 22:05:48.741825 6 log.go:172] (0xc002297550) Reply frame received for 3 I0321 22:05:48.741850 6 log.go:172] (0xc002297550) (0xc001ac9040) Create stream I0321 22:05:48.741861 6 log.go:172] (0xc002297550) (0xc001ac9040) Stream added, broadcasting: 5 I0321 22:05:48.742767 6 log.go:172] (0xc002297550) Reply frame received for 5 I0321 22:05:48.852896 6 log.go:172] (0xc002297550) Data frame received for 3 I0321 22:05:48.852929 6 log.go:172] (0xc0024f4dc0) (3) Data frame handling I0321 22:05:48.852953 6 log.go:172] (0xc0024f4dc0) (3) Data frame sent I0321 22:05:48.853863 6 log.go:172] (0xc002297550) Data frame received for 5 I0321 22:05:48.853916 6 log.go:172] (0xc001ac9040) (5) Data frame handling I0321 22:05:48.853950 6 log.go:172] (0xc002297550) Data frame received for 3 I0321 22:05:48.853965 6 log.go:172] (0xc0024f4dc0) (3) Data frame handling I0321 22:05:48.856467 6 log.go:172] (0xc002297550) Data frame received for 1 I0321 22:05:48.856506 6 log.go:172] (0xc0012eedc0) (1) Data frame handling I0321 22:05:48.856522 6 log.go:172] (0xc0012eedc0) (1) Data frame sent I0321 22:05:48.856551 6 log.go:172] (0xc002297550) (0xc0012eedc0) Stream removed, broadcasting: 1 I0321 22:05:48.856583 6 log.go:172] (0xc002297550) Go away received I0321 22:05:48.856865 6 log.go:172] (0xc002297550) (0xc0012eedc0) Stream removed, broadcasting: 1 I0321 22:05:48.856926 6 log.go:172] (0xc002297550) (0xc0024f4dc0) Stream removed, broadcasting: 3 I0321 22:05:48.856945 6 log.go:172] (0xc002297550) (0xc001ac9040) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 21 22:05:48.856: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5420 PodName:dns-5420 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:05:48.857: INFO: >>> kubeConfig: /root/.kube/config I0321 22:05:48.891456 6 log.go:172] (0xc002b38630) (0xc0010e1040) Create stream I0321 22:05:48.891495 6 log.go:172] (0xc002b38630) (0xc0010e1040) Stream added, broadcasting: 1 I0321 22:05:48.896354 6 log.go:172] (0xc002b38630) Reply frame received for 1 I0321 22:05:48.896408 6 log.go:172] (0xc002b38630) (0xc0012eefa0) Create stream I0321 22:05:48.896432 6 log.go:172] (0xc002b38630) (0xc0012eefa0) Stream added, broadcasting: 3 I0321 22:05:48.897732 6 log.go:172] (0xc002b38630) Reply frame received for 3 I0321 22:05:48.897766 6 log.go:172] (0xc002b38630) (0xc0024f4e60) Create stream I0321 22:05:48.897780 6 log.go:172] (0xc002b38630) (0xc0024f4e60) Stream added, broadcasting: 5 I0321 22:05:48.898696 6 log.go:172] (0xc002b38630) Reply frame received for 5 I0321 22:05:48.959826 6 log.go:172] (0xc002b38630) Data frame received for 3 I0321 22:05:48.959860 6 log.go:172] (0xc0012eefa0) (3) Data frame handling I0321 22:05:48.959882 6 log.go:172] (0xc0012eefa0) (3) Data frame sent I0321 22:05:48.960362 6 log.go:172] (0xc002b38630) Data frame received for 5 I0321 22:05:48.960419 6 log.go:172] (0xc0024f4e60) (5) Data frame handling I0321 22:05:48.960658 6 log.go:172] (0xc002b38630) Data frame received for 3 I0321 22:05:48.960700 6 log.go:172] (0xc0012eefa0) (3) Data frame handling I0321 22:05:48.962339 6 log.go:172] (0xc002b38630) Data frame received for 1 I0321 22:05:48.962381 6 log.go:172] (0xc0010e1040) (1) Data frame handling I0321 22:05:48.962396 6 log.go:172] (0xc0010e1040) (1) Data frame sent I0321 22:05:48.962414 6 log.go:172] (0xc002b38630) (0xc0010e1040) Stream removed, broadcasting: 1 I0321 22:05:48.962443 6 log.go:172] (0xc002b38630) Go away received I0321 22:05:48.962611 6 log.go:172] (0xc002b38630) (0xc0010e1040) Stream removed, broadcasting: 1 I0321 22:05:48.962634 6 log.go:172] (0xc002b38630) (0xc0012eefa0) Stream removed, broadcasting: 3 I0321 22:05:48.962646 6 log.go:172] (0xc002b38630) (0xc0024f4e60) Stream removed, broadcasting: 5 Mar 21 22:05:48.962: INFO: Deleting pod dns-5420... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:05:48.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5420" for this suite. 
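The essentials of the long pod dump above reduce to dnsPolicy: None plus a custom dnsConfig; the nameserver 1.1.1.1 and search domain resolv.conf.local are exactly the values the test injected. A hand-runnable equivalent (pod name is illustrative; image and args are taken from the dump):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo                   # hypothetical name
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
  dnsPolicy: "None"
  dnsConfig:
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
EOF
kubectl exec dns-demo -- cat /etc/resolv.conf   # should list 1.1.1.1 and resolv.conf.local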
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":207,"skipped":3209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:05:49.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 21 22:05:55.319: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:05:55.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1037" for this suite. • [SLOW TEST:6.217 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:05:55.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-8169 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8169 to expose endpoints map[] Mar 21 22:05:55.467: INFO: Get endpoints failed (7.916197ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 21 22:05:56.470: INFO: successfully validated that service multi-endpoint-test in namespace services-8169 exposes endpoints map[] (1.011018252s elapsed) STEP: Creating pod pod1 in namespace services-8169 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8169 to expose endpoints map[pod1:[100]] Mar 21 22:05:59.537: INFO: successfully validated that service multi-endpoint-test in namespace services-8169 exposes endpoints map[pod1:[100]] (3.060510963s elapsed) STEP: Creating pod pod2 in namespace services-8169 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8169 to expose endpoints map[pod1:[100] pod2:[101]] Mar 21 22:06:03.693: INFO: successfully validated that service multi-endpoint-test in namespace services-8169 exposes endpoints map[pod1:[100] pod2:[101]] (4.151556414s elapsed) STEP: Deleting pod pod1 in namespace services-8169 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8169 to expose endpoints map[pod2:[101]] Mar 21 22:06:04.755: INFO: successfully validated that service multi-endpoint-test in namespace services-8169 exposes endpoints map[pod2:[101]] (1.057447665s elapsed) STEP: Deleting pod pod2 in namespace services-8169 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8169 to expose endpoints map[] Mar 21 22:06:05.789: INFO: successfully validated that service multi-endpoint-test in namespace services-8169 exposes endpoints map[] (1.029651767s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:06:05.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8169" for this suite. 
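The spec above creates a two-port service and watches its Endpoints object track pods being created and deleted. A rough sketch of the service half only, assuming arbitrary port numbers and an invented selector label (the log's [100]/[101] entries are the per-pod target ports):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test        # name from the log
spec:
  selector:
    app: multi-endpoint-test       # hypothetical selector label
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
EOF
kubectl get endpoints multi-endpoint-test -w   # watch the endpoint map fill and drain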
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.528 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":209,"skipped":3270,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:06:05.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 21 22:06:05.941: INFO: Waiting up to 5m0s for pod "pod-ecc18c52-76af-47f8-a58e-d84d3453e625" in namespace "emptydir-1901" to be "success or failure" Mar 21 22:06:05.944: INFO: Pod "pod-ecc18c52-76af-47f8-a58e-d84d3453e625": Phase="Pending", Reason="", readiness=false. Elapsed: 3.866818ms Mar 21 22:06:07.948: INFO: Pod "pod-ecc18c52-76af-47f8-a58e-d84d3453e625": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007861829s Mar 21 22:06:09.953: INFO: Pod "pod-ecc18c52-76af-47f8-a58e-d84d3453e625": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012270774s STEP: Saw pod success Mar 21 22:06:09.953: INFO: Pod "pod-ecc18c52-76af-47f8-a58e-d84d3453e625" satisfied condition "success or failure" Mar 21 22:06:09.956: INFO: Trying to get logs from node jerma-worker pod pod-ecc18c52-76af-47f8-a58e-d84d3453e625 container test-container: STEP: delete the pod Mar 21 22:06:09.978: INFO: Waiting for pod pod-ecc18c52-76af-47f8-a58e-d84d3453e625 to disappear Mar 21 22:06:09.982: INFO: Pod pod-ecc18c52-76af-47f8-a58e-d84d3453e625 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:06:09.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1901" for this suite. 
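For the (non-root,0666,default) case above, the non-root half is expressed through the pod securityContext and the 0666 half through the file mode the container sets. A loose approximation only (uid, names, and command invented; the real test uses a dedicated mount-test image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo      # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # the "non-root" part
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a %u' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # default medium
EOF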
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:06:09.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-4m8g STEP: Creating a pod to test atomic-volume-subpath Mar 21 22:06:10.090: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-4m8g" in namespace "subpath-6300" to be "success or failure" Mar 21 22:06:10.096: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Pending", Reason="", readiness=false. Elapsed: 5.73699ms Mar 21 22:06:12.100: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009897512s Mar 21 22:06:14.105: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Running", Reason="", readiness=true. Elapsed: 4.014307951s Mar 21 22:06:16.109: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Running", Reason="", readiness=true. Elapsed: 6.018338632s Mar 21 22:06:18.113: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Running", Reason="", readiness=true. Elapsed: 8.022377605s Mar 21 22:06:20.117: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Running", Reason="", readiness=true. Elapsed: 10.026563996s Mar 21 22:06:22.121: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Running", Reason="", readiness=true. Elapsed: 12.030710042s Mar 21 22:06:24.126: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Running", Reason="", readiness=true. Elapsed: 14.035166323s Mar 21 22:06:26.135: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Running", Reason="", readiness=true. Elapsed: 16.044864622s Mar 21 22:06:28.141: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Running", Reason="", readiness=true. Elapsed: 18.050108989s Mar 21 22:06:30.144: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Running", Reason="", readiness=true. Elapsed: 20.05308411s Mar 21 22:06:32.148: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Running", Reason="", readiness=true. Elapsed: 22.057845477s Mar 21 22:06:34.152: INFO: Pod "pod-subpath-test-projected-4m8g": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.061854782s STEP: Saw pod success Mar 21 22:06:34.152: INFO: Pod "pod-subpath-test-projected-4m8g" satisfied condition "success or failure" Mar 21 22:06:34.156: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-4m8g container test-container-subpath-projected-4m8g: STEP: delete the pod Mar 21 22:06:34.174: INFO: Waiting for pod pod-subpath-test-projected-4m8g to disappear Mar 21 22:06:34.184: INFO: Pod pod-subpath-test-projected-4m8g no longer exists STEP: Deleting pod pod-subpath-test-projected-4m8g Mar 21 22:06:34.184: INFO: Deleting pod "pod-subpath-test-projected-4m8g" in namespace "subpath-6300" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:06:34.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6300" for this suite. • [SLOW TEST:24.217 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":211,"skipped":3319,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:06:34.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 21 22:06:38.317: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 21 22:06:53.416: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:06:53.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1356" for this suite. 
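The graceful-deletion flow the Delete Grace Period spec checks can be reproduced by hand: issue a delete with a grace period, then watch the pod carry a deletionTimestamp until the kubelet confirms termination and the object disappears. Pod name and image below are placeholders:

kubectl run graceful-demo --image=busybox --restart=Never -- sleep 3600
kubectl delete pod graceful-demo --grace-period=30 --wait=false
# While the grace period runs, the object still exists but is marked:
kubectl get pod graceful-demo -o jsonpath='{.metadata.deletionTimestamp}'
# Once the kubelet observes termination, the get returns NotFound.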
• [SLOW TEST:19.221 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":212,"skipped":3337,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:06:53.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:06:53.512: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:06:59.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7678" for this suite. 
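Listing custom resource definition objects, as the spec above does, presupposes a registered CRD. A minimal sketch with an invented group and kind (the conformance test builds its own randomly named CRD, not this one):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # hypothetical group/kind
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl get crd                    # the definition itself
kubectl get widgets                # listing objects of the custom resource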
• [SLOW TEST:6.527 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":213,"skipped":3364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:06:59.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 21 22:07:04.547: INFO: Successfully updated pod "labelsupdatef2e80946-82de-4eb9-887c-cf51c196a4ad" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:07:06.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-393" for this suite. 
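The label-update check above relies on a downwardAPI volume: the kubelet projects metadata.labels into a file and rewrites that file when the labels change, which is what the "Successfully updated pod" line verified. A sketch with invented names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo          # hypothetical name
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl label pod labelsupdate-demo key=value2 --overwrite
# The kubelet refreshes /etc/podinfo/labels shortly afterwards.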
• [SLOW TEST:6.626 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3423,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:07:06.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6652.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6652.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6652.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6652.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6652.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6652.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 22:07:14.745: INFO: DNS probes using dns-6652/dns-test-28820d84-4c63-49c4-b23c-9d47d20ec10d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:07:14.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6652" for this suite. 
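The wheezy/jessie probe loops in the spec above boil down to two hostname lookups, taken verbatim from the commands in the log; run from any pod in the cluster, they confirm that the headless service publishes both the FQDN and the short name (the latter via the resolver search path):

getent hosts dns-querier-2.dns-test-service-2.dns-6652.svc.cluster.local
getent hosts dns-querier-2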
• [SLOW TEST:8.353 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":215,"skipped":3430,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:07:14.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 21 22:07:15.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9182' Mar 21 22:07:15.651: INFO: stderr: "" Mar 21 22:07:15.651: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 21 22:07:15.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9182' Mar 21 22:07:15.791: INFO: stderr: "" Mar 21 22:07:15.791: INFO: stdout: "update-demo-nautilus-qx4zg update-demo-nautilus-tcs5b " Mar 21 22:07:15.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qx4zg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9182' Mar 21 22:07:15.880: INFO: stderr: "" Mar 21 22:07:15.881: INFO: stdout: "" Mar 21 22:07:15.881: INFO: update-demo-nautilus-qx4zg is created but not running Mar 21 22:07:20.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9182' Mar 21 22:07:20.993: INFO: stderr: "" Mar 21 22:07:20.993: INFO: stdout: "update-demo-nautilus-qx4zg update-demo-nautilus-tcs5b " Mar 21 22:07:20.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qx4zg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9182' Mar 21 22:07:21.091: INFO: stderr: "" Mar 21 22:07:21.091: INFO: stdout: "true" Mar 21 22:07:21.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qx4zg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9182' Mar 21 22:07:21.181: INFO: stderr: "" Mar 21 22:07:21.181: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 22:07:21.181: INFO: validating pod update-demo-nautilus-qx4zg Mar 21 22:07:21.185: INFO: got data: { "image": "nautilus.jpg" } Mar 21 22:07:21.186: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 22:07:21.186: INFO: update-demo-nautilus-qx4zg is verified up and running Mar 21 22:07:21.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tcs5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9182' Mar 21 22:07:21.272: INFO: stderr: "" Mar 21 22:07:21.272: INFO: stdout: "true" Mar 21 22:07:21.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tcs5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9182' Mar 21 22:07:21.362: INFO: stderr: "" Mar 21 22:07:21.362: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 22:07:21.362: INFO: validating pod update-demo-nautilus-tcs5b Mar 21 22:07:21.366: INFO: got data: { "image": "nautilus.jpg" } Mar 21 22:07:21.366: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 22:07:21.366: INFO: update-demo-nautilus-tcs5b is verified up and running STEP: scaling down the replication controller Mar 21 22:07:21.368: INFO: scanned /root for discovery docs: Mar 21 22:07:21.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9182' Mar 21 22:07:22.507: INFO: stderr: "" Mar 21 22:07:22.507: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 21 22:07:22.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9182' Mar 21 22:07:22.628: INFO: stderr: "" Mar 21 22:07:22.628: INFO: stdout: "update-demo-nautilus-qx4zg update-demo-nautilus-tcs5b " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 22:07:27.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9182' Mar 21 22:07:27.726: INFO: stderr: "" Mar 21 22:07:27.726: INFO: stdout: "update-demo-nautilus-qx4zg update-demo-nautilus-tcs5b " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 22:07:32.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9182' Mar 21 22:07:32.825: INFO: stderr: "" Mar 21 22:07:32.825: INFO: stdout: "update-demo-nautilus-tcs5b " Mar 21 22:07:32.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tcs5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9182' Mar 21 22:07:32.926: INFO: stderr: "" Mar 21 22:07:32.926: INFO: stdout: "true" Mar 21 22:07:32.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tcs5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9182' Mar 21 22:07:33.020: INFO: stderr: "" Mar 21 22:07:33.020: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 22:07:33.020: INFO: validating pod update-demo-nautilus-tcs5b Mar 21 22:07:33.023: INFO: got data: { "image": "nautilus.jpg" } Mar 21 22:07:33.023: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 22:07:33.023: INFO: update-demo-nautilus-tcs5b is verified up and running STEP: scaling up the replication controller Mar 21 22:07:33.025: INFO: scanned /root for discovery docs: Mar 21 22:07:33.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9182' Mar 21 22:07:34.160: INFO: stderr: "" Mar 21 22:07:34.160: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 21 22:07:34.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9182' Mar 21 22:07:34.258: INFO: stderr: "" Mar 21 22:07:34.258: INFO: stdout: "update-demo-nautilus-crbkl update-demo-nautilus-tcs5b " Mar 21 22:07:34.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-crbkl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9182' Mar 21 22:07:34.345: INFO: stderr: "" Mar 21 22:07:34.345: INFO: stdout: "" Mar 21 22:07:34.345: INFO: update-demo-nautilus-crbkl is created but not running Mar 21 22:07:39.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9182' Mar 21 22:07:39.436: INFO: stderr: "" Mar 21 22:07:39.436: INFO: stdout: "update-demo-nautilus-crbkl update-demo-nautilus-tcs5b " Mar 21 22:07:39.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-crbkl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9182' Mar 21 22:07:39.524: INFO: stderr: "" Mar 21 22:07:39.524: INFO: stdout: "true" Mar 21 22:07:39.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-crbkl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9182' Mar 21 22:07:39.607: INFO: stderr: "" Mar 21 22:07:39.607: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 22:07:39.607: INFO: validating pod update-demo-nautilus-crbkl Mar 21 22:07:39.611: INFO: got data: { "image": "nautilus.jpg" } Mar 21 22:07:39.612: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 22:07:39.612: INFO: update-demo-nautilus-crbkl is verified up and running Mar 21 22:07:39.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tcs5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9182' Mar 21 22:07:39.697: INFO: stderr: "" Mar 21 22:07:39.697: INFO: stdout: "true" Mar 21 22:07:39.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tcs5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9182' Mar 21 22:07:39.795: INFO: stderr: "" Mar 21 22:07:39.795: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 22:07:39.795: INFO: validating pod update-demo-nautilus-tcs5b Mar 21 22:07:39.806: INFO: got data: { "image": "nautilus.jpg" } Mar 21 22:07:39.806: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 22:07:39.806: INFO: update-demo-nautilus-tcs5b is verified up and running STEP: using delete to clean up resources Mar 21 22:07:39.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9182' Mar 21 22:07:39.917: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 21 22:07:39.917: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 21 22:07:39.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9182' Mar 21 22:07:40.028: INFO: stderr: "No resources found in kubectl-9182 namespace.\n" Mar 21 22:07:40.028: INFO: stdout: "" Mar 21 22:07:40.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9182 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 21 22:07:40.126: INFO: stderr: "" Mar 21 22:07:40.126: INFO: stdout: "update-demo-nautilus-crbkl\nupdate-demo-nautilus-tcs5b\n" Mar 21 22:07:40.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9182' Mar 21 22:07:40.728: INFO: stderr: "No resources found in kubectl-9182 namespace.\n" Mar 21 22:07:40.728: INFO: stdout: "" Mar 21 22:07:40.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9182 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 21 22:07:40.812: INFO: stderr: "" Mar 21 22:07:40.812: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:07:40.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9182" for this suite. • [SLOW TEST:25.882 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":216,"skipped":3436,"failed":0} [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:07:40.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 22:07:40.994: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31e18067-9b98-448e-8d87-048aacc0cf00" in namespace "downward-api-4063" to be "success or failure" Mar 21 22:07:41.016: INFO: Pod 
"downwardapi-volume-31e18067-9b98-448e-8d87-048aacc0cf00": Phase="Pending", Reason="", readiness=false. Elapsed: 22.146994ms Mar 21 22:07:43.020: INFO: Pod "downwardapi-volume-31e18067-9b98-448e-8d87-048aacc0cf00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026459303s Mar 21 22:07:45.025: INFO: Pod "downwardapi-volume-31e18067-9b98-448e-8d87-048aacc0cf00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03104075s STEP: Saw pod success Mar 21 22:07:45.025: INFO: Pod "downwardapi-volume-31e18067-9b98-448e-8d87-048aacc0cf00" satisfied condition "success or failure" Mar 21 22:07:45.036: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-31e18067-9b98-448e-8d87-048aacc0cf00 container client-container: STEP: delete the pod Mar 21 22:07:45.052: INFO: Waiting for pod downwardapi-volume-31e18067-9b98-448e-8d87-048aacc0cf00 to disappear Mar 21 22:07:45.057: INFO: Pod downwardapi-volume-31e18067-9b98-448e-8d87-048aacc0cf00 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:07:45.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4063" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3436,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:07:45.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 22:07:45.630: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 22:07:47.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425265, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425265, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425265, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425265, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} 
STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 22:07:50.689: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:07:50.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2666" for this suite. STEP: Destroying namespace "webhook-2666-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.905 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":218,"skipped":3441,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:07:50.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:07:51.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4978' Mar 21 22:07:51.549: INFO: stderr: "" Mar 21 22:07:51.549: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 21 22:07:51.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4978' Mar 21 22:07:52.160: INFO: stderr: "" Mar 21 22:07:52.160: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 21 22:07:53.164: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 22:07:53.164: INFO: Found 0 / 1 Mar 21 22:07:54.164: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 22:07:54.165: INFO: Found 0 / 1 Mar 21 22:07:55.164: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 22:07:55.164: INFO: Found 1 / 1 Mar 21 22:07:55.164: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Mar 21 22:07:55.168: INFO: Selector matched 1 pods for map[app:agnhost] Mar 21 22:07:55.168: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 21 22:07:55.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-lhxjs --namespace=kubectl-4978' Mar 21 22:07:55.279: INFO: stderr: "" Mar 21 22:07:55.279: INFO: stdout: "Name: agnhost-master-lhxjs\nNamespace: kubectl-4978\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Sat, 21 Mar 2020 22:07:51 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.239\nIPs:\n IP: 10.244.1.239\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://3ade2c72ba6f8a93353fc98a83e3ca247b002fb94b98f071e855ce8bec595b3d\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 21 Mar 2020 22:07:53 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-9gccm (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-9gccm:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-9gccm\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-4978/agnhost-master-lhxjs to jerma-worker\n Normal Pulled 3s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker Created container agnhost-master\n Normal Started 2s kubelet, jerma-worker Started container agnhost-master\n" Mar 21 22:07:55.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-4978' Mar 21 22:07:55.396: INFO: stderr: "" Mar 21 22:07:55.396: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-4978\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-lhxjs\n" Mar 21 22:07:55.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-4978' Mar 21 22:07:55.502: INFO: stderr: "" Mar 21 22:07:55.502: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-4978\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.106.150.43\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.239:6379\nSession Affinity: None\nEvents: \n" Mar 21 22:07:55.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node 
jerma-control-plane' Mar 21 22:07:55.630: INFO: stderr: "" Mar 21 22:07:55.631: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Sat, 21 Mar 2020 22:07:47 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 21 Mar 2020 22:05:24 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 21 Mar 2020 22:05:24 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 21 Mar 2020 22:05:24 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 21 Mar 2020 22:05:24 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 6d3h\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 6d3h\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d3h\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 6d3h\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 6d3h\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 6d3h\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d3h\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 6d3h\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d3h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 21 22:07:55.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4978' Mar 21 22:07:55.753: INFO: stderr: "" Mar 21 
22:07:55.753: INFO: stdout: "Name: kubectl-4978\nLabels: e2e-framework=kubectl\n e2e-run=45f670d0-e3a9-46e2-ab39-c3b5a85ec799\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:07:55.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4978" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":219,"skipped":3446,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:07:55.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 21 22:07:56.870: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 21 22:07:58.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425276, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425276, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425276, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425276, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 22:08:01.916: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:08:01.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:08:03.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5781" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.413 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":220,"skipped":3459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:08:03.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0321 22:08:04.501465 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 21 22:08:04.501: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:08:04.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7627" for this suite. 
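The deletion here works through ownerReferences: the Deployment owns its ReplicaSet and the ReplicaSet owns its pods, so a non-orphaning delete of the Deployment lets the garbage collector cascade downward (the "expected 0 rs, got 1 rs" step is just the poll catching the collector mid-flight). A minimal sketch of the same chain, with a hypothetical deployment name:

    kubectl create deployment gc-demo --image=nginx
    # The generated ReplicaSet points back at its owner:
    kubectl get rs -l app=gc-demo \
      -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}'   # -> Deployment
    # Deleting the owner without orphaning lets the GC remove the RS and its pods:
    kubectl delete deployment gc-demo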
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":221,"skipped":3499,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:08:04.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-18b45541-d6c7-490e-99e6-0b4755f130e3 STEP: Creating a pod to test consume secrets Mar 21 22:08:04.597: INFO: Waiting up to 5m0s for pod "pod-secrets-c68c4613-0c13-42ce-8632-aebc5b7775d5" in namespace "secrets-7701" to be "success or failure" Mar 21 22:08:04.618: INFO: Pod "pod-secrets-c68c4613-0c13-42ce-8632-aebc5b7775d5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.670895ms Mar 21 22:08:06.622: INFO: Pod "pod-secrets-c68c4613-0c13-42ce-8632-aebc5b7775d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024647823s Mar 21 22:08:08.626: INFO: Pod "pod-secrets-c68c4613-0c13-42ce-8632-aebc5b7775d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028376751s STEP: Saw pod success Mar 21 22:08:08.626: INFO: Pod "pod-secrets-c68c4613-0c13-42ce-8632-aebc5b7775d5" satisfied condition "success or failure" Mar 21 22:08:08.628: INFO: Trying to get logs from node jerma-worker pod pod-secrets-c68c4613-0c13-42ce-8632-aebc5b7775d5 container secret-volume-test: STEP: delete the pod Mar 21 22:08:08.774: INFO: Waiting for pod pod-secrets-c68c4613-0c13-42ce-8632-aebc5b7775d5 to disappear Mar 21 22:08:08.787: INFO: Pod pod-secrets-c68c4613-0c13-42ce-8632-aebc5b7775d5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:08:08.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7701" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3514,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:08:08.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2011 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2011 STEP: Creating statefulset with conflicting port in namespace statefulset-2011 STEP: Waiting until pod test-pod will start running in namespace statefulset-2011 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2011 Mar 21 22:08:14.916: INFO: Observed stateful pod in namespace: statefulset-2011, name: ss-0, uid: 7def5f1b-40dc-478a-ba22-7d4ab56d2e65, status phase: Pending. Waiting for statefulset controller to delete. Mar 21 22:08:15.492: INFO: Observed stateful pod in namespace: statefulset-2011, name: ss-0, uid: 7def5f1b-40dc-478a-ba22-7d4ab56d2e65, status phase: Failed. Waiting for statefulset controller to delete. Mar 21 22:08:15.499: INFO: Observed stateful pod in namespace: statefulset-2011, name: ss-0, uid: 7def5f1b-40dc-478a-ba22-7d4ab56d2e65, status phase: Failed. Waiting for statefulset controller to delete. Mar 21 22:08:15.503: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2011 STEP: Removing pod with conflicting port in namespace statefulset-2011 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2011 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 21 22:08:19.574: INFO: Deleting all statefulset in ns statefulset-2011 Mar 21 22:08:19.577: INFO: Scaling statefulset ss to 0 Mar 21 22:08:29.611: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 22:08:29.614: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:08:29.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2011" for this suite. 
• [SLOW TEST:20.854 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":223,"skipped":3521,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:08:29.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 21 22:08:34.245: INFO: Successfully updated pod "labelsupdate34eba1a1-e882-4796-a5c1-97055e643a46" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:08:36.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8880" for this suite. • [SLOW TEST:6.638 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3532,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:08:36.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:08:47.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-593" for this suite. • [SLOW TEST:11.081 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":225,"skipped":3535,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:08:47.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 21 22:08:47.431: INFO: Waiting up to 5m0s for pod "downward-api-22e0370b-faa1-4a55-a2bc-890cba934ea4" in namespace "downward-api-2998" to be "success or failure" Mar 21 22:08:47.474: INFO: Pod "downward-api-22e0370b-faa1-4a55-a2bc-890cba934ea4": Phase="Pending", Reason="", readiness=false. Elapsed: 43.348138ms Mar 21 22:08:49.495: INFO: Pod "downward-api-22e0370b-faa1-4a55-a2bc-890cba934ea4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064055128s Mar 21 22:08:51.499: INFO: Pod "downward-api-22e0370b-faa1-4a55-a2bc-890cba934ea4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068203287s STEP: Saw pod success Mar 21 22:08:51.499: INFO: Pod "downward-api-22e0370b-faa1-4a55-a2bc-890cba934ea4" satisfied condition "success or failure" Mar 21 22:08:51.502: INFO: Trying to get logs from node jerma-worker pod downward-api-22e0370b-faa1-4a55-a2bc-890cba934ea4 container dapi-container: STEP: delete the pod Mar 21 22:08:51.526: INFO: Waiting for pod downward-api-22e0370b-faa1-4a55-a2bc-890cba934ea4 to disappear Mar 21 22:08:51.530: INFO: Pod downward-api-22e0370b-faa1-4a55-a2bc-890cba934ea4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:08:51.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2998" for this suite. 
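The dapi-container above receives its identity through downward API environment variables rather than a volume. A minimal sketch of such a pod, with hypothetical names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep -E '^POD_(NAME|NAMESPACE|IP)='"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
    EOF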
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3545,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:08:51.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:08:51.592: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e4376e95-baed-4032-ab24-ca4044dec1b3" in namespace "security-context-test-824" to be "success or failure" Mar 21 22:08:51.612: INFO: Pod "busybox-user-65534-e4376e95-baed-4032-ab24-ca4044dec1b3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.271748ms Mar 21 22:08:53.615: INFO: Pod "busybox-user-65534-e4376e95-baed-4032-ab24-ca4044dec1b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022941557s Mar 21 22:08:55.619: INFO: Pod "busybox-user-65534-e4376e95-baed-4032-ab24-ca4044dec1b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026831232s Mar 21 22:08:55.619: INFO: Pod "busybox-user-65534-e4376e95-baed-4032-ab24-ca4044dec1b3" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:08:55.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-824" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3547,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:08:55.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:08:59.777: INFO: Waiting up to 5m0s for pod "client-envvars-416817c3-d96a-492a-afa1-5a4fa3230e3e" in namespace "pods-6293" to be "success or failure" Mar 21 22:08:59.782: INFO: Pod "client-envvars-416817c3-d96a-492a-afa1-5a4fa3230e3e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.426903ms Mar 21 22:09:01.809: INFO: Pod "client-envvars-416817c3-d96a-492a-afa1-5a4fa3230e3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032343371s Mar 21 22:09:03.813: INFO: Pod "client-envvars-416817c3-d96a-492a-afa1-5a4fa3230e3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036047125s STEP: Saw pod success Mar 21 22:09:03.813: INFO: Pod "client-envvars-416817c3-d96a-492a-afa1-5a4fa3230e3e" satisfied condition "success or failure" Mar 21 22:09:03.815: INFO: Trying to get logs from node jerma-worker pod client-envvars-416817c3-d96a-492a-afa1-5a4fa3230e3e container env3cont: STEP: delete the pod Mar 21 22:09:03.859: INFO: Waiting for pod client-envvars-416817c3-d96a-492a-afa1-5a4fa3230e3e to disappear Mar 21 22:09:03.861: INFO: Pod client-envvars-416817c3-d96a-492a-afa1-5a4fa3230e3e no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:09:03.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6293" for this suite. 
• [SLOW TEST:8.239 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3581,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:09:03.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-9596 STEP: creating replication controller nodeport-test in namespace services-9596 I0321 22:09:04.343478 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9596, replica count: 2 I0321 22:09:07.394007 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 22:09:10.394328 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 21 22:09:10.394: INFO: Creating new exec pod Mar 21 22:09:15.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9596 execpod5dzfn -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 21 22:09:15.621: INFO: stderr: "I0321 22:09:15.554496 3625 log.go:172] (0xc000a9ef20) (0xc000b3e5a0) Create stream\nI0321 22:09:15.554594 3625 log.go:172] (0xc000a9ef20) (0xc000b3e5a0) Stream added, broadcasting: 1\nI0321 22:09:15.559150 3625 log.go:172] (0xc000a9ef20) Reply frame received for 1\nI0321 22:09:15.559191 3625 log.go:172] (0xc000a9ef20) (0xc0006c8640) Create stream\nI0321 22:09:15.559202 3625 log.go:172] (0xc000a9ef20) (0xc0006c8640) Stream added, broadcasting: 3\nI0321 22:09:15.560231 3625 log.go:172] (0xc000a9ef20) Reply frame received for 3\nI0321 22:09:15.560278 3625 log.go:172] (0xc000a9ef20) (0xc0004bf400) Create stream\nI0321 22:09:15.560293 3625 log.go:172] (0xc000a9ef20) (0xc0004bf400) Stream added, broadcasting: 5\nI0321 22:09:15.561379 3625 log.go:172] (0xc000a9ef20) Reply frame received for 5\nI0321 22:09:15.615548 3625 log.go:172] (0xc000a9ef20) Data frame received for 5\nI0321 22:09:15.615569 3625 log.go:172] (0xc0004bf400) (5) Data frame handling\nI0321 22:09:15.615583 3625 log.go:172] (0xc0004bf400) (5) Data frame sent\nI0321 22:09:15.615590 3625 log.go:172] (0xc000a9ef20) Data frame received for 5\nI0321 22:09:15.615597 3625 log.go:172] (0xc0004bf400) (5) Data frame handling\n+ nc -zv -t -w 2 
nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0321 22:09:15.615623 3625 log.go:172] (0xc0004bf400) (5) Data frame sent\nI0321 22:09:15.615971 3625 log.go:172] (0xc000a9ef20) Data frame received for 3\nI0321 22:09:15.616009 3625 log.go:172] (0xc0006c8640) (3) Data frame handling\nI0321 22:09:15.616084 3625 log.go:172] (0xc000a9ef20) Data frame received for 5\nI0321 22:09:15.616124 3625 log.go:172] (0xc0004bf400) (5) Data frame handling\nI0321 22:09:15.618056 3625 log.go:172] (0xc000a9ef20) Data frame received for 1\nI0321 22:09:15.618090 3625 log.go:172] (0xc000b3e5a0) (1) Data frame handling\nI0321 22:09:15.618110 3625 log.go:172] (0xc000b3e5a0) (1) Data frame sent\nI0321 22:09:15.618322 3625 log.go:172] (0xc000a9ef20) (0xc000b3e5a0) Stream removed, broadcasting: 1\nI0321 22:09:15.618382 3625 log.go:172] (0xc000a9ef20) Go away received\nI0321 22:09:15.618576 3625 log.go:172] (0xc000a9ef20) (0xc000b3e5a0) Stream removed, broadcasting: 1\nI0321 22:09:15.618596 3625 log.go:172] (0xc000a9ef20) (0xc0006c8640) Stream removed, broadcasting: 3\nI0321 22:09:15.618603 3625 log.go:172] (0xc000a9ef20) (0xc0004bf400) Stream removed, broadcasting: 5\n" Mar 21 22:09:15.621: INFO: stdout: "" Mar 21 22:09:15.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9596 execpod5dzfn -- /bin/sh -x -c nc -zv -t -w 2 10.98.80.102 80' Mar 21 22:09:15.822: INFO: stderr: "I0321 22:09:15.752724 3646 log.go:172] (0xc00099c790) (0xc000976000) Create stream\nI0321 22:09:15.752780 3646 log.go:172] (0xc00099c790) (0xc000976000) Stream added, broadcasting: 1\nI0321 22:09:15.755477 3646 log.go:172] (0xc00099c790) Reply frame received for 1\nI0321 22:09:15.755540 3646 log.go:172] (0xc00099c790) (0xc0006c3ae0) Create stream\nI0321 22:09:15.755568 3646 log.go:172] (0xc00099c790) (0xc0006c3ae0) Stream added, broadcasting: 3\nI0321 22:09:15.756682 3646 log.go:172] (0xc00099c790) Reply frame received for 3\nI0321 22:09:15.756725 3646 log.go:172] (0xc00099c790) (0xc0006c3cc0) Create stream\nI0321 22:09:15.756739 3646 log.go:172] (0xc00099c790) (0xc0006c3cc0) Stream added, broadcasting: 5\nI0321 22:09:15.757779 3646 log.go:172] (0xc00099c790) Reply frame received for 5\nI0321 22:09:15.816450 3646 log.go:172] (0xc00099c790) Data frame received for 5\nI0321 22:09:15.816478 3646 log.go:172] (0xc0006c3cc0) (5) Data frame handling\nI0321 22:09:15.816486 3646 log.go:172] (0xc0006c3cc0) (5) Data frame sent\nI0321 22:09:15.816492 3646 log.go:172] (0xc00099c790) Data frame received for 5\n+ nc -zv -t -w 2 10.98.80.102 80\nConnection to 10.98.80.102 80 port [tcp/http] succeeded!\nI0321 22:09:15.816496 3646 log.go:172] (0xc0006c3cc0) (5) Data frame handling\nI0321 22:09:15.816526 3646 log.go:172] (0xc00099c790) Data frame received for 3\nI0321 22:09:15.816538 3646 log.go:172] (0xc0006c3ae0) (3) Data frame handling\nI0321 22:09:15.818189 3646 log.go:172] (0xc00099c790) Data frame received for 1\nI0321 22:09:15.818209 3646 log.go:172] (0xc000976000) (1) Data frame handling\nI0321 22:09:15.818225 3646 log.go:172] (0xc000976000) (1) Data frame sent\nI0321 22:09:15.818238 3646 log.go:172] (0xc00099c790) (0xc000976000) Stream removed, broadcasting: 1\nI0321 22:09:15.818251 3646 log.go:172] (0xc00099c790) Go away received\nI0321 22:09:15.818543 3646 log.go:172] (0xc00099c790) (0xc000976000) Stream removed, broadcasting: 1\nI0321 22:09:15.818556 3646 log.go:172] (0xc00099c790) (0xc0006c3ae0) Stream removed, broadcasting: 3\nI0321 22:09:15.818562 3646 log.go:172] 
(0xc00099c790) (0xc0006c3cc0) Stream removed, broadcasting: 5\n" Mar 21 22:09:15.822: INFO: stdout: "" Mar 21 22:09:15.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9596 execpod5dzfn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30848' Mar 21 22:09:16.022: INFO: stderr: "I0321 22:09:15.960038 3668 log.go:172] (0xc000ac8840) (0xc000b1c3c0) Create stream\nI0321 22:09:15.960118 3668 log.go:172] (0xc000ac8840) (0xc000b1c3c0) Stream added, broadcasting: 1\nI0321 22:09:15.964927 3668 log.go:172] (0xc000ac8840) Reply frame received for 1\nI0321 22:09:15.964974 3668 log.go:172] (0xc000ac8840) (0xc00067a640) Create stream\nI0321 22:09:15.964988 3668 log.go:172] (0xc000ac8840) (0xc00067a640) Stream added, broadcasting: 3\nI0321 22:09:15.966040 3668 log.go:172] (0xc000ac8840) Reply frame received for 3\nI0321 22:09:15.966085 3668 log.go:172] (0xc000ac8840) (0xc0006ed400) Create stream\nI0321 22:09:15.966097 3668 log.go:172] (0xc000ac8840) (0xc0006ed400) Stream added, broadcasting: 5\nI0321 22:09:15.966949 3668 log.go:172] (0xc000ac8840) Reply frame received for 5\nI0321 22:09:16.016710 3668 log.go:172] (0xc000ac8840) Data frame received for 3\nI0321 22:09:16.016738 3668 log.go:172] (0xc00067a640) (3) Data frame handling\nI0321 22:09:16.016768 3668 log.go:172] (0xc000ac8840) Data frame received for 5\nI0321 22:09:16.016787 3668 log.go:172] (0xc0006ed400) (5) Data frame handling\nI0321 22:09:16.016805 3668 log.go:172] (0xc0006ed400) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30848\nConnection to 172.17.0.10 30848 port [tcp/30848] succeeded!\nI0321 22:09:16.016927 3668 log.go:172] (0xc000ac8840) Data frame received for 5\nI0321 22:09:16.016948 3668 log.go:172] (0xc0006ed400) (5) Data frame handling\nI0321 22:09:16.018659 3668 log.go:172] (0xc000ac8840) Data frame received for 1\nI0321 22:09:16.018673 3668 log.go:172] (0xc000b1c3c0) (1) Data frame handling\nI0321 22:09:16.018685 3668 log.go:172] (0xc000b1c3c0) (1) Data frame sent\nI0321 22:09:16.018820 3668 log.go:172] (0xc000ac8840) (0xc000b1c3c0) Stream removed, broadcasting: 1\nI0321 22:09:16.018985 3668 log.go:172] (0xc000ac8840) Go away received\nI0321 22:09:16.019034 3668 log.go:172] (0xc000ac8840) (0xc000b1c3c0) Stream removed, broadcasting: 1\nI0321 22:09:16.019043 3668 log.go:172] (0xc000ac8840) (0xc00067a640) Stream removed, broadcasting: 3\nI0321 22:09:16.019049 3668 log.go:172] (0xc000ac8840) (0xc0006ed400) Stream removed, broadcasting: 5\n" Mar 21 22:09:16.022: INFO: stdout: "" Mar 21 22:09:16.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9596 execpod5dzfn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30848' Mar 21 22:09:16.231: INFO: stderr: "I0321 22:09:16.159590 3691 log.go:172] (0xc0000f5600) (0xc0006961e0) Create stream\nI0321 22:09:16.159661 3691 log.go:172] (0xc0000f5600) (0xc0006961e0) Stream added, broadcasting: 1\nI0321 22:09:16.165772 3691 log.go:172] (0xc0000f5600) Reply frame received for 1\nI0321 22:09:16.165818 3691 log.go:172] (0xc0000f5600) (0xc000696280) Create stream\nI0321 22:09:16.165831 3691 log.go:172] (0xc0000f5600) (0xc000696280) Stream added, broadcasting: 3\nI0321 22:09:16.167103 3691 log.go:172] (0xc0000f5600) Reply frame received for 3\nI0321 22:09:16.167147 3691 log.go:172] (0xc0000f5600) (0xc0005d1900) Create stream\nI0321 22:09:16.167165 3691 log.go:172] (0xc0000f5600) (0xc0005d1900) Stream added, broadcasting: 5\nI0321 22:09:16.168033 3691 log.go:172] (0xc0000f5600) Reply frame received for 5\nI0321 
22:09:16.224309 3691 log.go:172] (0xc0000f5600) Data frame received for 5\nI0321 22:09:16.224336 3691 log.go:172] (0xc0005d1900) (5) Data frame handling\nI0321 22:09:16.224352 3691 log.go:172] (0xc0005d1900) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 30848\nI0321 22:09:16.224535 3691 log.go:172] (0xc0000f5600) Data frame received for 5\nI0321 22:09:16.224551 3691 log.go:172] (0xc0005d1900) (5) Data frame handling\nI0321 22:09:16.224567 3691 log.go:172] (0xc0005d1900) (5) Data frame sent\nConnection to 172.17.0.8 30848 port [tcp/30848] succeeded!\nI0321 22:09:16.224907 3691 log.go:172] (0xc0000f5600) Data frame received for 5\nI0321 22:09:16.224924 3691 log.go:172] (0xc0005d1900) (5) Data frame handling\nI0321 22:09:16.225240 3691 log.go:172] (0xc0000f5600) Data frame received for 3\nI0321 22:09:16.225258 3691 log.go:172] (0xc000696280) (3) Data frame handling\nI0321 22:09:16.227120 3691 log.go:172] (0xc0000f5600) Data frame received for 1\nI0321 22:09:16.227138 3691 log.go:172] (0xc0006961e0) (1) Data frame handling\nI0321 22:09:16.227148 3691 log.go:172] (0xc0006961e0) (1) Data frame sent\nI0321 22:09:16.227160 3691 log.go:172] (0xc0000f5600) (0xc0006961e0) Stream removed, broadcasting: 1\nI0321 22:09:16.227216 3691 log.go:172] (0xc0000f5600) Go away received\nI0321 22:09:16.227439 3691 log.go:172] (0xc0000f5600) (0xc0006961e0) Stream removed, broadcasting: 1\nI0321 22:09:16.227450 3691 log.go:172] (0xc0000f5600) (0xc000696280) Stream removed, broadcasting: 3\nI0321 22:09:16.227456 3691 log.go:172] (0xc0000f5600) (0xc0005d1900) Stream removed, broadcasting: 5\n" Mar 21 22:09:16.231: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:09:16.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9596" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.372 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":229,"skipped":3597,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:09:16.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
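------------------------------
A note on the two [sig-network] Services exec transcripts above: the I0321 "Create stream / Data frame" lines are the stream plumbing kubectl exec sets up per command (stream 1 appears to carry errors, 3 stdout, and 5 stderr, which is why the "+ nc ..." trace from sh -x shows up on stream 5). The NodePort check itself reduces to the two probes below, replayable by hand with the namespace, pod name, node IPs, and port from this run:

# probe the NodePort on each node IP from the helper pod the test created
kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9596 execpod5dzfn -- \
  /bin/sh -x -c 'nc -zv -t -w 2 172.17.0.10 30848'
kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9596 execpod5dzfn -- \
  /bin/sh -x -c 'nc -zv -t -w 2 172.17.0.8 30848'
# "succeeded!" on both means kube-proxy answers on the node port from every node
------------------------------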
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 21 22:09:24.341: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 21 22:09:24.368: INFO: Pod pod-with-poststart-http-hook still exists Mar 21 22:09:26.368: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 21 22:09:26.372: INFO: Pod pod-with-poststart-http-hook still exists Mar 21 22:09:28.368: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 21 22:09:28.372: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:09:28.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7249" for this suite. • [SLOW TEST:12.138 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3703,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:09:28.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Mar 21 22:09:28.453: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:09:28.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3105" for this suite. 
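------------------------------
The Container Lifecycle Hook spec above only logs its teardown, so for orientation: the test first creates a handler pod, then starts "pod-with-poststart-http-hook", whose postStart httpGet must reach that handler before the pod is deleted again. A minimal sketch of such a pod follows; the container name, image, hook path, port, and handler IP are placeholders, not the test's actual values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # name as logged
spec:
  containers:
  - name: main                         # placeholder container name
    image: busybox                     # placeholder image
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        httpGet:
          path: /echo                  # placeholder path on the handler
          port: 8080                   # placeholder handler port
          host: 10.244.1.10            # placeholder: the handler pod's IP
EOF
------------------------------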
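------------------------------
The Kubectl client Proxy server spec directly above asserts that a proxy started with port 0 binds an ephemeral port and serves /api/. (The doubled "kubectl kubectl" in the logged command line looks like a logging artifact: the framework prints the binary path followed by the complete argument vector, whose first element is again "kubectl".) By hand:

# -p 0 lets the proxy pick a free port; it prints the bound address on startup
kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter &
# then, substituting the port the proxy printed:
curl http://127.0.0.1:PORT/api/
------------------------------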
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":231,"skipped":3706,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:09:28.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-8v484 in namespace proxy-7023 I0321 22:09:28.652269 6 runners.go:189] Created replication controller with name: proxy-service-8v484, namespace: proxy-7023, replica count: 1 I0321 22:09:29.702676 6 runners.go:189] proxy-service-8v484 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 22:09:30.702909 6 runners.go:189] proxy-service-8v484 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 22:09:31.703119 6 runners.go:189] proxy-service-8v484 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0321 22:09:32.703434 6 runners.go:189] proxy-service-8v484 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 21 22:09:32.706: INFO: setup took 4.123106207s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 21 22:09:32.711: INFO: (0) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 4.852566ms) Mar 21 22:09:32.713: INFO: (0) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... (200; 6.2516ms) Mar 21 22:09:32.715: INFO: (0) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 8.346378ms) Mar 21 22:09:32.715: INFO: (0) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 8.505766ms) Mar 21 22:09:32.715: INFO: (0) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:1080/proxy/: ... 
(200; 8.388372ms) Mar 21 22:09:32.716: INFO: (0) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 9.553172ms) Mar 21 22:09:32.721: INFO: (0) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 13.985228ms) Mar 21 22:09:32.721: INFO: (0) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 14.570295ms) Mar 21 22:09:32.722: INFO: (0) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: test (200; 26.030754ms) Mar 21 22:09:32.733: INFO: (0) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 26.014891ms) Mar 21 22:09:32.733: INFO: (0) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 26.228722ms) Mar 21 22:09:32.733: INFO: (0) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname1/proxy/: foo (200; 26.606572ms) Mar 21 22:09:32.733: INFO: (0) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 26.569365ms) Mar 21 22:09:32.737: INFO: (1) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 3.392216ms) Mar 21 22:09:32.737: INFO: (1) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:1080/proxy/: ... (200; 3.422933ms) Mar 21 22:09:32.737: INFO: (1) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 3.996563ms) Mar 21 22:09:32.737: INFO: (1) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 4.057332ms) Mar 21 22:09:32.738: INFO: (1) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 4.214431ms) Mar 21 22:09:32.738: INFO: (1) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 4.139613ms) Mar 21 22:09:32.738: INFO: (1) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: test<... 
(200; 5.330809ms) Mar 21 22:09:32.739: INFO: (1) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 5.673883ms) Mar 21 22:09:32.739: INFO: (1) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 5.729027ms) Mar 21 22:09:32.739: INFO: (1) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 5.90975ms) Mar 21 22:09:32.739: INFO: (1) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 6.02971ms) Mar 21 22:09:32.740: INFO: (1) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 6.058193ms) Mar 21 22:09:32.740: INFO: (1) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 6.088917ms) Mar 21 22:09:32.740: INFO: (1) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 6.20472ms) Mar 21 22:09:32.740: INFO: (1) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname1/proxy/: foo (200; 6.118549ms) Mar 21 22:09:32.744: INFO: (2) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 4.730555ms) Mar 21 22:09:32.745: INFO: (2) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 4.872458ms) Mar 21 22:09:32.745: INFO: (2) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 5.426827ms) Mar 21 22:09:32.745: INFO: (2) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname1/proxy/: foo (200; 5.446026ms) Mar 21 22:09:32.745: INFO: (2) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 5.427549ms) Mar 21 22:09:32.745: INFO: (2) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 5.548706ms) Mar 21 22:09:32.746: INFO: (2) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 6.074343ms) Mar 21 22:09:32.746: INFO: (2) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... (200; 6.022887ms) Mar 21 22:09:32.746: INFO: (2) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 6.044927ms) Mar 21 22:09:32.746: INFO: (2) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 6.209244ms) Mar 21 22:09:32.746: INFO: (2) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 6.178311ms) Mar 21 22:09:32.746: INFO: (2) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 6.23375ms) Mar 21 22:09:32.746: INFO: (2) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:1080/proxy/: ... (200; 6.25356ms) Mar 21 22:09:32.746: INFO: (2) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 6.293797ms) Mar 21 22:09:32.746: INFO: (2) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 6.285485ms) Mar 21 22:09:32.746: INFO: (2) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: ... (200; 3.028554ms) Mar 21 22:09:32.749: INFO: (3) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 3.209902ms) Mar 21 22:09:32.751: INFO: (3) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: test<... 
(200; 5.894411ms) Mar 21 22:09:32.752: INFO: (3) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 5.986446ms) Mar 21 22:09:32.752: INFO: (3) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 5.959616ms) Mar 21 22:09:32.752: INFO: (3) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 6.086256ms) Mar 21 22:09:32.752: INFO: (3) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 6.040753ms) Mar 21 22:09:32.752: INFO: (3) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 6.160752ms) Mar 21 22:09:32.752: INFO: (3) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 6.232508ms) Mar 21 22:09:32.753: INFO: (3) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 6.49744ms) Mar 21 22:09:32.753: INFO: (3) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname1/proxy/: foo (200; 6.414268ms) Mar 21 22:09:32.753: INFO: (3) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 6.45082ms) Mar 21 22:09:32.753: INFO: (3) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 6.722603ms) Mar 21 22:09:32.759: INFO: (4) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 6.285887ms) Mar 21 22:09:32.759: INFO: (4) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 6.378472ms) Mar 21 22:09:32.759: INFO: (4) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 6.36588ms) Mar 21 22:09:32.759: INFO: (4) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 6.496913ms) Mar 21 22:09:32.760: INFO: (4) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 6.704841ms) Mar 21 22:09:32.760: INFO: (4) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 6.687122ms) Mar 21 22:09:32.760: INFO: (4) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: test<... (200; 6.826849ms) Mar 21 22:09:32.760: INFO: (4) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 6.704634ms) Mar 21 22:09:32.760: INFO: (4) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:1080/proxy/: ... 
(200; 6.961ms) Mar 21 22:09:32.760: INFO: (4) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 7.128235ms) Mar 21 22:09:32.760: INFO: (4) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 7.30203ms) Mar 21 22:09:32.761: INFO: (4) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname1/proxy/: foo (200; 7.954642ms) Mar 21 22:09:32.761: INFO: (4) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 7.99936ms) Mar 21 22:09:32.761: INFO: (4) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 7.996485ms) Mar 21 22:09:32.761: INFO: (4) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 8.011894ms) Mar 21 22:09:32.763: INFO: (5) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 2.126972ms) Mar 21 22:09:32.765: INFO: (5) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 3.951795ms) Mar 21 22:09:32.765: INFO: (5) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... (200; 4.039379ms) Mar 21 22:09:32.766: INFO: (5) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 4.555661ms) Mar 21 22:09:32.766: INFO: (5) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 4.911679ms) Mar 21 22:09:32.766: INFO: (5) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 4.921533ms) Mar 21 22:09:32.766: INFO: (5) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: ... (200; 4.972309ms) Mar 21 22:09:32.766: INFO: (5) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 5.033565ms) Mar 21 22:09:32.766: INFO: (5) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 5.043599ms) Mar 21 22:09:32.766: INFO: (5) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 5.035838ms) Mar 21 22:09:32.766: INFO: (5) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 5.137713ms) Mar 21 22:09:32.768: INFO: (5) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 6.522964ms) Mar 21 22:09:32.768: INFO: (5) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 6.633404ms) Mar 21 22:09:32.768: INFO: (5) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 6.609739ms) Mar 21 22:09:32.768: INFO: (5) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname1/proxy/: foo (200; 6.637322ms) Mar 21 22:09:32.772: INFO: (6) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 4.573438ms) Mar 21 22:09:32.773: INFO: (6) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 5.510399ms) Mar 21 22:09:32.773: INFO: (6) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 5.536877ms) Mar 21 22:09:32.774: INFO: (6) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:1080/proxy/: ... 
(200; 5.876595ms) Mar 21 22:09:32.774: INFO: (6) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 5.917839ms) Mar 21 22:09:32.774: INFO: (6) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... (200; 5.926058ms) Mar 21 22:09:32.774: INFO: (6) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 5.889874ms) Mar 21 22:09:32.774: INFO: (6) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 5.96556ms) Mar 21 22:09:32.774: INFO: (6) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname1/proxy/: foo (200; 5.887909ms) Mar 21 22:09:32.774: INFO: (6) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 5.922665ms) Mar 21 22:09:32.774: INFO: (6) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 5.907537ms) Mar 21 22:09:32.774: INFO: (6) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 5.905639ms) Mar 21 22:09:32.774: INFO: (6) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 5.970414ms) Mar 21 22:09:32.774: INFO: (6) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 6.191053ms) Mar 21 22:09:32.774: INFO: (6) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: test<... (200; 5.075536ms) Mar 21 22:09:32.780: INFO: (7) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 5.144207ms) Mar 21 22:09:32.780: INFO: (7) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:1080/proxy/: ... (200; 5.170593ms) Mar 21 22:09:32.780: INFO: (7) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 5.196396ms) Mar 21 22:09:32.780: INFO: (7) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 5.193026ms) Mar 21 22:09:32.780: INFO: (7) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 5.327088ms) Mar 21 22:09:32.780: INFO: (7) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 5.438585ms) Mar 21 22:09:32.781: INFO: (7) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 6.875837ms) Mar 21 22:09:32.782: INFO: (7) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 6.884205ms) Mar 21 22:09:32.782: INFO: (7) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 6.996459ms) Mar 21 22:09:32.782: INFO: (7) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 7.162195ms) Mar 21 22:09:32.782: INFO: (7) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname1/proxy/: foo (200; 7.391712ms) Mar 21 22:09:32.782: INFO: (7) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 7.393076ms) Mar 21 22:09:32.786: INFO: (8) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 3.54137ms) Mar 21 22:09:32.786: INFO: (8) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 3.87334ms) Mar 21 22:09:32.791: INFO: (8) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 8.734467ms) Mar 21 22:09:32.791: INFO: (8) 
/api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 8.793823ms) Mar 21 22:09:32.791: INFO: (8) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 8.829904ms) Mar 21 22:09:32.792: INFO: (8) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: test<... (200; 10.002364ms) Mar 21 22:09:32.794: INFO: (8) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:1080/proxy/: ... (200; 12.033871ms) Mar 21 22:09:32.794: INFO: (8) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 12.124128ms) Mar 21 22:09:32.794: INFO: (8) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname1/proxy/: foo (200; 12.02943ms) Mar 21 22:09:32.794: INFO: (8) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 12.085996ms) Mar 21 22:09:32.794: INFO: (8) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 12.281027ms) Mar 21 22:09:32.795: INFO: (8) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 12.563722ms) Mar 21 22:09:32.795: INFO: (8) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 12.519967ms) Mar 21 22:09:32.796: INFO: (8) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 13.641184ms) Mar 21 22:09:32.799: INFO: (9) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 2.790384ms) Mar 21 22:09:32.799: INFO: (9) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 2.943694ms) Mar 21 22:09:32.799: INFO: (9) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: ... (200; 5.013757ms) Mar 21 22:09:32.801: INFO: (9) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 4.969337ms) Mar 21 22:09:32.801: INFO: (9) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... 
(200; 4.967597ms) Mar 21 22:09:32.801: INFO: (9) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 5.125067ms) Mar 21 22:09:32.801: INFO: (9) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 5.227431ms) Mar 21 22:09:32.801: INFO: (9) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname1/proxy/: foo (200; 5.244087ms) Mar 21 22:09:32.801: INFO: (9) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 5.301747ms) Mar 21 22:09:32.801: INFO: (9) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 5.238587ms) Mar 21 22:09:32.801: INFO: (9) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 5.277525ms) Mar 21 22:09:32.804: INFO: (10) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 2.334916ms) Mar 21 22:09:32.804: INFO: (10) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 2.492288ms) Mar 21 22:09:32.804: INFO: (10) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 2.464731ms) Mar 21 22:09:32.806: INFO: (10) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 4.836096ms) Mar 21 22:09:32.806: INFO: (10) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... (200; 4.778059ms) Mar 21 22:09:32.806: INFO: (10) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 4.806153ms) Mar 21 22:09:32.806: INFO: (10) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname1/proxy/: foo (200; 4.815068ms) Mar 21 22:09:32.806: INFO: (10) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 4.819645ms) Mar 21 22:09:32.806: INFO: (10) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 4.862956ms) Mar 21 22:09:32.806: INFO: (10) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 4.911424ms) Mar 21 22:09:32.807: INFO: (10) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:1080/proxy/: ... (200; 5.133875ms) Mar 21 22:09:32.807: INFO: (10) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 5.143457ms) Mar 21 22:09:32.807: INFO: (10) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 5.127966ms) Mar 21 22:09:32.807: INFO: (10) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 5.206909ms) Mar 21 22:09:32.807: INFO: (10) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: ... (200; 5.019222ms) Mar 21 22:09:32.812: INFO: (11) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 4.991307ms) Mar 21 22:09:32.812: INFO: (11) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 4.979315ms) Mar 21 22:09:32.812: INFO: (11) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 5.031263ms) Mar 21 22:09:32.812: INFO: (11) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... 
(200; 4.991685ms) Mar 21 22:09:32.812: INFO: (11) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 5.098888ms) Mar 21 22:09:32.812: INFO: (11) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 5.022004ms) Mar 21 22:09:32.812: INFO: (11) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: ... (200; 3.478612ms) Mar 21 22:09:32.816: INFO: (12) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: test (200; 5.047544ms) Mar 21 22:09:32.817: INFO: (12) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 4.472717ms) Mar 21 22:09:32.817: INFO: (12) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 5.009755ms) Mar 21 22:09:32.817: INFO: (12) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 4.795333ms) Mar 21 22:09:32.817: INFO: (12) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 4.963404ms) Mar 21 22:09:32.817: INFO: (12) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 4.442213ms) Mar 21 22:09:32.817: INFO: (12) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... (200; 4.919739ms) Mar 21 22:09:32.820: INFO: (13) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... (200; 2.203427ms) Mar 21 22:09:32.820: INFO: (13) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 2.524518ms) Mar 21 22:09:32.820: INFO: (13) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 2.566192ms) Mar 21 22:09:32.821: INFO: (13) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: ... (200; 3.835502ms) Mar 21 22:09:32.822: INFO: (13) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 4.295804ms) Mar 21 22:09:32.822: INFO: (13) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 4.316443ms) Mar 21 22:09:32.822: INFO: (13) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 4.639609ms) Mar 21 22:09:32.822: INFO: (13) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 4.723691ms) Mar 21 22:09:32.822: INFO: (13) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 4.704403ms) Mar 21 22:09:32.822: INFO: (13) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 4.798599ms) Mar 21 22:09:32.828: INFO: (14) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... 
(200; 5.336361ms) Mar 21 22:09:32.828: INFO: (14) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 5.8527ms) Mar 21 22:09:32.828: INFO: (14) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 5.903447ms) Mar 21 22:09:32.828: INFO: (14) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 5.937216ms) Mar 21 22:09:32.828: INFO: (14) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 5.923305ms) Mar 21 22:09:32.828: INFO: (14) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 6.017865ms) Mar 21 22:09:32.828: INFO: (14) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 6.029175ms) Mar 21 22:09:32.828: INFO: (14) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 6.057063ms) Mar 21 22:09:32.828: INFO: (14) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 6.110748ms) Mar 21 22:09:32.828: INFO: (14) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname1/proxy/: foo (200; 6.069347ms) Mar 21 22:09:32.829: INFO: (14) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 6.248905ms) Mar 21 22:09:32.829: INFO: (14) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 6.34004ms) Mar 21 22:09:32.829: INFO: (14) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 6.527388ms) Mar 21 22:09:32.829: INFO: (14) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:1080/proxy/: ... (200; 6.633097ms) Mar 21 22:09:32.829: INFO: (14) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 6.573774ms) Mar 21 22:09:32.829: INFO: (14) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: ... (200; 2.355693ms) Mar 21 22:09:32.832: INFO: (15) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 2.514486ms) Mar 21 22:09:32.833: INFO: (15) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 3.617737ms) Mar 21 22:09:32.833: INFO: (15) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 4.220155ms) Mar 21 22:09:32.833: INFO: (15) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/: tls baz (200; 4.28047ms) Mar 21 22:09:32.834: INFO: (15) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... 
(200; 4.481662ms) Mar 21 22:09:32.834: INFO: (15) /api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname2/proxy/: tls qux (200; 4.592829ms) Mar 21 22:09:32.834: INFO: (15) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 4.962232ms) Mar 21 22:09:32.834: INFO: (15) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 5.019858ms) Mar 21 22:09:32.834: INFO: (15) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 4.981812ms) Mar 21 22:09:32.834: INFO: (15) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: test (200; 5.059038ms) Mar 21 22:09:32.837: INFO: (16) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 2.978321ms) Mar 21 22:09:32.837: INFO: (16) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:1080/proxy/: ... (200; 2.957783ms) Mar 21 22:09:32.838: INFO: (16) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 3.228966ms) Mar 21 22:09:32.838: INFO: (16) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 3.312572ms) Mar 21 22:09:32.838: INFO: (16) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 4.006473ms) Mar 21 22:09:32.838: INFO: (16) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 4.065098ms) Mar 21 22:09:32.838: INFO: (16) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 4.071771ms) Mar 21 22:09:32.839: INFO: (16) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... (200; 4.093853ms) Mar 21 22:09:32.839: INFO: (16) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: test<... (200; 2.484611ms) Mar 21 22:09:32.843: INFO: (17) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 4.094263ms) Mar 21 22:09:32.843: INFO: (17) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:462/proxy/: tls qux (200; 4.241148ms) Mar 21 22:09:32.844: INFO: (17) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 4.552878ms) Mar 21 22:09:32.844: INFO: (17) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: ... (200; 5.601194ms) Mar 21 22:09:32.848: INFO: (18) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... (200; 3.285578ms) Mar 21 22:09:32.848: INFO: (18) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:1080/proxy/: ... (200; 3.45798ms) Mar 21 22:09:32.849: INFO: (18) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:460/proxy/: tls baz (200; 4.072893ms) Mar 21 22:09:32.849: INFO: (18) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 4.078767ms) Mar 21 22:09:32.849: INFO: (18) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:160/proxy/: foo (200; 4.176598ms) Mar 21 22:09:32.849: INFO: (18) /api/v1/namespaces/proxy-7023/pods/http:proxy-service-8v484-l5xvd:162/proxy/: bar (200; 4.161705ms) Mar 21 22:09:32.849: INFO: (18) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd/proxy/: test (200; 4.225991ms) Mar 21 22:09:32.849: INFO: (18) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: ... 
(200; 1.942348ms) Mar 21 22:09:32.854: INFO: (19) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:162/proxy/: bar (200; 3.914215ms) Mar 21 22:09:32.854: INFO: (19) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:1080/proxy/: test<... (200; 3.91374ms) Mar 21 22:09:32.855: INFO: (19) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname2/proxy/: bar (200; 4.248444ms) Mar 21 22:09:32.855: INFO: (19) /api/v1/namespaces/proxy-7023/pods/https:proxy-service-8v484-l5xvd:443/proxy/: test (200; 5.511015ms) Mar 21 22:09:32.856: INFO: (19) /api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/: foo (200; 5.666932ms) Mar 21 22:09:32.856: INFO: (19) /api/v1/namespaces/proxy-7023/services/http:proxy-service-8v484:portname2/proxy/: bar (200; 5.66293ms) Mar 21 22:09:32.856: INFO: (19) /api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/: foo (200; 5.707028ms) STEP: deleting ReplicationController proxy-service-8v484 in namespace proxy-7023, will wait for the garbage collector to delete the pods Mar 21 22:09:32.914: INFO: Deleting ReplicationController proxy-service-8v484 took: 6.515862ms Mar 21 22:09:33.215: INFO: Terminating ReplicationController proxy-service-8v484 pods took: 300.27309ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:09:39.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7023" for this suite. • [SLOW TEST:11.088 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":232,"skipped":3708,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:09:39.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 21 22:09:39.729: INFO: Waiting up to 5m0s for pod "downward-api-ca13f4b1-080f-40f0-a241-fa5e659fcedb" in namespace "downward-api-648" to be "success or failure" Mar 21 22:09:39.774: INFO: Pod "downward-api-ca13f4b1-080f-40f0-a241-fa5e659fcedb": Phase="Pending", Reason="", readiness=false. Elapsed: 45.434686ms Mar 21 22:09:41.778: INFO: Pod "downward-api-ca13f4b1-080f-40f0-a241-fa5e659fcedb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049503911s Mar 21 22:09:43.782: INFO: Pod "downward-api-ca13f4b1-080f-40f0-a241-fa5e659fcedb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.053588834s STEP: Saw pod success Mar 21 22:09:43.782: INFO: Pod "downward-api-ca13f4b1-080f-40f0-a241-fa5e659fcedb" satisfied condition "success or failure" Mar 21 22:09:43.786: INFO: Trying to get logs from node jerma-worker pod downward-api-ca13f4b1-080f-40f0-a241-fa5e659fcedb container dapi-container: STEP: delete the pod Mar 21 22:09:43.817: INFO: Waiting for pod downward-api-ca13f4b1-080f-40f0-a241-fa5e659fcedb to disappear Mar 21 22:09:43.822: INFO: Pod downward-api-ca13f4b1-080f-40f0-a241-fa5e659fcedb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:09:43.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-648" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3720,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:09:43.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-551.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-551.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-551.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 22:09:49.927: INFO: DNS probes using dns-test-8a4a7cb9-f064-4709-a985-82b90463ac49 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-551.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-551.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-551.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 22:09:56.085: INFO: File wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local from pod dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 22:09:56.089: INFO: File jessie_udp@dns-test-service-3.dns-551.svc.cluster.local from pod dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 contains 'foo.example.com. ' instead of 'bar.example.com.' 
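------------------------------
Stepping back to the [sig-network] Proxy version v1 spec above: the 320 attempts cycle through every proxy URL shape the apiserver supports — pod by port number, pod by "scheme:name:port", and service by named port, plus the https: variants — and the bodies logged after each URL ("foo", "bar", "tls baz", "tls qux", "test", and the truncated "test<..." and "..." payloads) are the echo server's responses. Through a locally running kubectl proxy (default address 127.0.0.1:8001), the same endpoints from this run look like:

# pod proxy, port number encoded in the name segment
curl http://127.0.0.1:8001/api/v1/namespaces/proxy-7023/pods/proxy-service-8v484-l5xvd:160/proxy/
# service proxy by named port
curl http://127.0.0.1:8001/api/v1/namespaces/proxy-7023/services/proxy-service-8v484:portname1/proxy/
# https variant: prefix the resource name with "https:"
curl http://127.0.0.1:8001/api/v1/namespaces/proxy-7023/services/https:proxy-service-8v484:tlsportname1/proxy/
------------------------------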
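------------------------------
The [sig-node] Downward API spec above passes because the dapi-container's environment contains the node's IP; the mechanism is an env var sourced from the status.hostIP fieldRef. A minimal sketch (pod name, image, and command are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container             # container name as logged
    image: busybox                   # placeholder image
    command: ["sh", "-c", "printenv HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # downward API: IP of the node running the pod
EOF
------------------------------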
Mar 21 22:09:56.089: INFO: Lookups using dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 failed for: [wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local jessie_udp@dns-test-service-3.dns-551.svc.cluster.local] Mar 21 22:10:01.094: INFO: File wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local from pod dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 22:10:01.098: INFO: File jessie_udp@dns-test-service-3.dns-551.svc.cluster.local from pod dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 22:10:01.098: INFO: Lookups using dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 failed for: [wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local jessie_udp@dns-test-service-3.dns-551.svc.cluster.local] Mar 21 22:10:06.094: INFO: File wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local from pod dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 22:10:06.098: INFO: File jessie_udp@dns-test-service-3.dns-551.svc.cluster.local from pod dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 22:10:06.098: INFO: Lookups using dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 failed for: [wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local jessie_udp@dns-test-service-3.dns-551.svc.cluster.local] Mar 21 22:10:11.093: INFO: File wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local from pod dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 22:10:11.116: INFO: File jessie_udp@dns-test-service-3.dns-551.svc.cluster.local from pod dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 22:10:11.116: INFO: Lookups using dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 failed for: [wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local jessie_udp@dns-test-service-3.dns-551.svc.cluster.local] Mar 21 22:10:16.094: INFO: File wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local from pod dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 22:10:16.098: INFO: File jessie_udp@dns-test-service-3.dns-551.svc.cluster.local from pod dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 contains 'foo.example.com. ' instead of 'bar.example.com.' 
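------------------------------
The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" records around here are the expected settling period, not a fault: after the spec flips the ExternalName target, the probe pods keep re-running their dig loop until the in-cluster DNS answer catches up (about 25 seconds in this run, presumably bounded by resolver caching). The moving parts, using this run's names:

# the ExternalName service the probes resolve
kubectl create service externalname dns-test-service-3 \
  --namespace=dns-551 --external-name=foo.example.com
# repoint it; dig keeps returning the old CNAME until caches expire
kubectl patch service dns-test-service-3 --namespace=dns-551 \
  -p '{"spec":{"externalName":"bar.example.com"}}'
# what each probe pod loops on
dig +short dns-test-service-3.dns-551.svc.cluster.local CNAME
------------------------------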
Mar 21 22:10:16.098: INFO: Lookups using dns-551/dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 failed for: [wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local jessie_udp@dns-test-service-3.dns-551.svc.cluster.local] Mar 21 22:10:21.101: INFO: DNS probes using dns-test-df51e73c-6761-458c-9f01-6da9428cbde2 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-551.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-551.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-551.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-551.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 22:10:27.282: INFO: DNS probes using dns-test-6b40a305-2740-4fd6-a51b-9ad9c274d6d8 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:10:27.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-551" for this suite. • [SLOW TEST:43.557 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":234,"skipped":3747,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:10:27.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:10:27.710: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-8fbc5839-2d67-4b17-a3f0-5afa87a6a16d" in namespace "security-context-test-2664" to be "success or failure" Mar 21 22:10:27.722: INFO: Pod "busybox-privileged-false-8fbc5839-2d67-4b17-a3f0-5afa87a6a16d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.289205ms Mar 21 22:10:29.726: INFO: Pod "busybox-privileged-false-8fbc5839-2d67-4b17-a3f0-5afa87a6a16d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015648989s Mar 21 22:10:31.730: INFO: Pod "busybox-privileged-false-8fbc5839-2d67-4b17-a3f0-5afa87a6a16d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020221401s Mar 21 22:10:31.730: INFO: Pod "busybox-privileged-false-8fbc5839-2d67-4b17-a3f0-5afa87a6a16d" satisfied condition "success or failure" Mar 21 22:10:31.736: INFO: Got logs for pod "busybox-privileged-false-8fbc5839-2d67-4b17-a3f0-5afa87a6a16d": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:10:31.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2664" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3767,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:10:31.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:10:35.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7289" for this suite. 
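------------------------------
In the Security Context spec above, the pod's "ip: RTNETLINK answers: Operation not permitted" log is the assertion: with privileged: false the container must be refused network reconfiguration. A sketch of such a pod; the image and the exact ip subcommand the test runs are assumptions here:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox                     # placeholder image
    command: ["sh", "-c", "ip link add dummy0 type dummy"]  # should be denied
    securityContext:
      privileged: false                # the property under test
EOF
# expect: "ip: RTNETLINK answers: Operation not permitted"
kubectl logs busybox-privileged-false-demo
------------------------------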
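------------------------------
The EmptyDir wrapper spec above only logs its cleanup steps; judging by those steps, it checks that a secret volume and a configmap volume mounted side by side in one pod do not collide in the kubelet's emptyDir-wrapped mounts. A hypothetical reproduction, with every name invented:

kubectl create secret generic wrapper-secret --from-literal=k=v
kubectl create configmap wrapper-cm --from-literal=k=v
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  containers:
  - name: c
    image: busybox            # placeholder image
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - {name: secret-volume, mountPath: /etc/secret-volume}
    - {name: configmap-volume, mountPath: /etc/configmap-volume}
  volumes:
  - name: secret-volume
    secret: {secretName: wrapper-secret}
  - name: configmap-volume
    configMap: {name: wrapper-cm}
EOF
------------------------------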
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":236,"skipped":3774,"failed":0} SSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:10:36.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 21 22:10:46.175: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:10:46.175: INFO: >>> kubeConfig: /root/.kube/config I0321 22:10:46.213956 6 log.go:172] (0xc000f86000) (0xc000b4f680) Create stream I0321 22:10:46.213994 6 log.go:172] (0xc000f86000) (0xc000b4f680) Stream added, broadcasting: 1 I0321 22:10:46.216101 6 log.go:172] (0xc000f86000) Reply frame received for 1 I0321 22:10:46.216145 6 log.go:172] (0xc000f86000) (0xc00106a0a0) Create stream I0321 22:10:46.216162 6 log.go:172] (0xc000f86000) (0xc00106a0a0) Stream added, broadcasting: 3 I0321 22:10:46.216996 6 log.go:172] (0xc000f86000) Reply frame received for 3 I0321 22:10:46.217031 6 log.go:172] (0xc000f86000) (0xc0024f48c0) Create stream I0321 22:10:46.217041 6 log.go:172] (0xc000f86000) (0xc0024f48c0) Stream added, broadcasting: 5 I0321 22:10:46.218226 6 log.go:172] (0xc000f86000) Reply frame received for 5 I0321 22:10:46.300988 6 log.go:172] (0xc000f86000) Data frame received for 5 I0321 22:10:46.301018 6 log.go:172] (0xc0024f48c0) (5) Data frame handling I0321 22:10:46.301048 6 log.go:172] (0xc000f86000) Data frame received for 3 I0321 22:10:46.301094 6 log.go:172] (0xc00106a0a0) (3) Data frame handling I0321 22:10:46.301283 6 log.go:172] (0xc00106a0a0) (3) Data frame sent I0321 22:10:46.301311 6 log.go:172] (0xc000f86000) Data frame received for 3 I0321 22:10:46.301325 6 log.go:172] (0xc00106a0a0) (3) Data frame handling I0321 22:10:46.303064 6 log.go:172] (0xc000f86000) Data frame received for 1 I0321 22:10:46.303082 6 log.go:172] (0xc000b4f680) (1) Data frame handling I0321 22:10:46.303103 6 log.go:172] (0xc000b4f680) (1) Data frame sent I0321 22:10:46.303120 6 log.go:172] (0xc000f86000) (0xc000b4f680) Stream removed, broadcasting: 1 I0321 22:10:46.303284 6 log.go:172] (0xc000f86000) (0xc000b4f680) Stream removed, broadcasting: 1 I0321 22:10:46.303333 6 log.go:172] (0xc000f86000) (0xc00106a0a0) Stream removed, broadcasting: 3 I0321 22:10:46.303370 6 log.go:172] (0xc000f86000) (0xc0024f48c0) Stream removed, broadcasting: 5 Mar 21 22:10:46.303: INFO: Exec stderr: "" I0321 22:10:46.303396 6 log.go:172] (0xc000f86000) Go away received Mar 21 22:10:46.303: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:10:46.303: INFO: >>> kubeConfig: /root/.kube/config I0321 22:10:46.336380 6 log.go:172] (0xc000f86790) (0xc000b4ff40) Create stream I0321 22:10:46.336409 6 log.go:172] (0xc000f86790) (0xc000b4ff40) Stream added, broadcasting: 1 I0321 22:10:46.338331 6 log.go:172] (0xc000f86790) Reply frame received for 1 I0321 22:10:46.338377 6 log.go:172] (0xc000f86790) (0xc0024f4a00) Create stream I0321 22:10:46.338390 6 log.go:172] (0xc000f86790) (0xc0024f4a00) Stream added, broadcasting: 3 I0321 22:10:46.339331 6 log.go:172] (0xc000f86790) Reply frame received for 3 I0321 22:10:46.339361 6 log.go:172] (0xc000f86790) (0xc00106a140) Create stream I0321 22:10:46.339371 6 log.go:172] (0xc000f86790) (0xc00106a140) Stream added, broadcasting: 5 I0321 22:10:46.340294 6 log.go:172] (0xc000f86790) Reply frame received for 5 I0321 22:10:46.419804 6 log.go:172] (0xc000f86790) Data frame received for 5 I0321 22:10:46.419825 6 log.go:172] (0xc00106a140) (5) Data frame handling I0321 22:10:46.419865 6 log.go:172] (0xc000f86790) Data frame received for 3 I0321 22:10:46.419907 6 log.go:172] (0xc0024f4a00) (3) Data frame handling I0321 22:10:46.419945 6 log.go:172] (0xc0024f4a00) (3) Data frame sent I0321 22:10:46.419971 6 log.go:172] (0xc000f86790) Data frame received for 3 I0321 22:10:46.419989 6 log.go:172] (0xc0024f4a00) (3) Data frame handling I0321 22:10:46.421635 6 log.go:172] (0xc000f86790) Data frame received for 1 I0321 22:10:46.421660 6 log.go:172] (0xc000b4ff40) (1) Data frame handling I0321 22:10:46.421816 6 log.go:172] (0xc000b4ff40) (1) Data frame sent I0321 22:10:46.421835 6 log.go:172] (0xc000f86790) (0xc000b4ff40) Stream removed, broadcasting: 1 I0321 22:10:46.421856 6 log.go:172] (0xc000f86790) Go away received I0321 22:10:46.422000 6 log.go:172] (0xc000f86790) (0xc000b4ff40) Stream removed, broadcasting: 1 I0321 22:10:46.422030 6 log.go:172] (0xc000f86790) (0xc0024f4a00) Stream removed, broadcasting: 3 I0321 22:10:46.422043 6 log.go:172] (0xc000f86790) (0xc00106a140) Stream removed, broadcasting: 5 Mar 21 22:10:46.422: INFO: Exec stderr: "" Mar 21 22:10:46.422: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:10:46.422: INFO: >>> kubeConfig: /root/.kube/config I0321 22:10:46.455381 6 log.go:172] (0xc000f86dc0) (0xc0011a8820) Create stream I0321 22:10:46.455409 6 log.go:172] (0xc000f86dc0) (0xc0011a8820) Stream added, broadcasting: 1 I0321 22:10:46.457525 6 log.go:172] (0xc000f86dc0) Reply frame received for 1 I0321 22:10:46.457559 6 log.go:172] (0xc000f86dc0) (0xc0011a88c0) Create stream I0321 22:10:46.457572 6 log.go:172] (0xc000f86dc0) (0xc0011a88c0) Stream added, broadcasting: 3 I0321 22:10:46.458448 6 log.go:172] (0xc000f86dc0) Reply frame received for 3 I0321 22:10:46.458479 6 log.go:172] (0xc000f86dc0) (0xc0023517c0) Create stream I0321 22:10:46.458497 6 log.go:172] (0xc000f86dc0) (0xc0023517c0) Stream added, broadcasting: 5 I0321 22:10:46.459421 6 log.go:172] (0xc000f86dc0) Reply frame received for 5 I0321 22:10:46.529455 6 log.go:172] (0xc000f86dc0) Data frame received for 5 I0321 22:10:46.529498 6 log.go:172] (0xc000f86dc0) Data frame received for 3 I0321 22:10:46.529542 6 log.go:172] (0xc0011a88c0) (3) Data frame handling I0321 22:10:46.529567 6 log.go:172] 
(0xc0011a88c0) (3) Data frame sent I0321 22:10:46.529578 6 log.go:172] (0xc000f86dc0) Data frame received for 3 I0321 22:10:46.529588 6 log.go:172] (0xc0011a88c0) (3) Data frame handling I0321 22:10:46.529614 6 log.go:172] (0xc0023517c0) (5) Data frame handling I0321 22:10:46.531171 6 log.go:172] (0xc000f86dc0) Data frame received for 1 I0321 22:10:46.531203 6 log.go:172] (0xc0011a8820) (1) Data frame handling I0321 22:10:46.531221 6 log.go:172] (0xc0011a8820) (1) Data frame sent I0321 22:10:46.531238 6 log.go:172] (0xc000f86dc0) (0xc0011a8820) Stream removed, broadcasting: 1 I0321 22:10:46.531261 6 log.go:172] (0xc000f86dc0) Go away received I0321 22:10:46.531383 6 log.go:172] (0xc000f86dc0) (0xc0011a8820) Stream removed, broadcasting: 1 I0321 22:10:46.531416 6 log.go:172] (0xc000f86dc0) (0xc0011a88c0) Stream removed, broadcasting: 3 I0321 22:10:46.531434 6 log.go:172] (0xc000f86dc0) (0xc0023517c0) Stream removed, broadcasting: 5 Mar 21 22:10:46.531: INFO: Exec stderr: "" Mar 21 22:10:46.531: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:10:46.531: INFO: >>> kubeConfig: /root/.kube/config I0321 22:10:46.573650 6 log.go:172] (0xc002b384d0) (0xc002351a40) Create stream I0321 22:10:46.573680 6 log.go:172] (0xc002b384d0) (0xc002351a40) Stream added, broadcasting: 1 I0321 22:10:46.575519 6 log.go:172] (0xc002b384d0) Reply frame received for 1 I0321 22:10:46.575559 6 log.go:172] (0xc002b384d0) (0xc001f720a0) Create stream I0321 22:10:46.575572 6 log.go:172] (0xc002b384d0) (0xc001f720a0) Stream added, broadcasting: 3 I0321 22:10:46.576641 6 log.go:172] (0xc002b384d0) Reply frame received for 3 I0321 22:10:46.576694 6 log.go:172] (0xc002b384d0) (0xc001f72140) Create stream I0321 22:10:46.576710 6 log.go:172] (0xc002b384d0) (0xc001f72140) Stream added, broadcasting: 5 I0321 22:10:46.577758 6 log.go:172] (0xc002b384d0) Reply frame received for 5 I0321 22:10:46.641692 6 log.go:172] (0xc002b384d0) Data frame received for 3 I0321 22:10:46.641730 6 log.go:172] (0xc002b384d0) Data frame received for 5 I0321 22:10:46.641761 6 log.go:172] (0xc001f72140) (5) Data frame handling I0321 22:10:46.641804 6 log.go:172] (0xc001f720a0) (3) Data frame handling I0321 22:10:46.641826 6 log.go:172] (0xc001f720a0) (3) Data frame sent I0321 22:10:46.641840 6 log.go:172] (0xc002b384d0) Data frame received for 3 I0321 22:10:46.641853 6 log.go:172] (0xc001f720a0) (3) Data frame handling I0321 22:10:46.643260 6 log.go:172] (0xc002b384d0) Data frame received for 1 I0321 22:10:46.643278 6 log.go:172] (0xc002351a40) (1) Data frame handling I0321 22:10:46.643296 6 log.go:172] (0xc002351a40) (1) Data frame sent I0321 22:10:46.643312 6 log.go:172] (0xc002b384d0) (0xc002351a40) Stream removed, broadcasting: 1 I0321 22:10:46.643412 6 log.go:172] (0xc002b384d0) Go away received I0321 22:10:46.643459 6 log.go:172] (0xc002b384d0) (0xc002351a40) Stream removed, broadcasting: 1 I0321 22:10:46.643498 6 log.go:172] (0xc002b384d0) (0xc001f720a0) Stream removed, broadcasting: 3 I0321 22:10:46.643520 6 log.go:172] (0xc002b384d0) (0xc001f72140) Stream removed, broadcasting: 5 Mar 21 22:10:46.643: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 21 22:10:46.643: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-pod ContainerName:busybox-3 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:10:46.643: INFO: >>> kubeConfig: /root/.kube/config I0321 22:10:46.676364 6 log.go:172] (0xc000db4420) (0xc001f723c0) Create stream I0321 22:10:46.676388 6 log.go:172] (0xc000db4420) (0xc001f723c0) Stream added, broadcasting: 1 I0321 22:10:46.678690 6 log.go:172] (0xc000db4420) Reply frame received for 1 I0321 22:10:46.678748 6 log.go:172] (0xc000db4420) (0xc002351b80) Create stream I0321 22:10:46.678765 6 log.go:172] (0xc000db4420) (0xc002351b80) Stream added, broadcasting: 3 I0321 22:10:46.679586 6 log.go:172] (0xc000db4420) Reply frame received for 3 I0321 22:10:46.679620 6 log.go:172] (0xc000db4420) (0xc0024f4aa0) Create stream I0321 22:10:46.679638 6 log.go:172] (0xc000db4420) (0xc0024f4aa0) Stream added, broadcasting: 5 I0321 22:10:46.680498 6 log.go:172] (0xc000db4420) Reply frame received for 5 I0321 22:10:46.744524 6 log.go:172] (0xc000db4420) Data frame received for 3 I0321 22:10:46.744550 6 log.go:172] (0xc002351b80) (3) Data frame handling I0321 22:10:46.744565 6 log.go:172] (0xc002351b80) (3) Data frame sent I0321 22:10:46.744665 6 log.go:172] (0xc000db4420) Data frame received for 3 I0321 22:10:46.744728 6 log.go:172] (0xc002351b80) (3) Data frame handling I0321 22:10:46.744783 6 log.go:172] (0xc000db4420) Data frame received for 5 I0321 22:10:46.744812 6 log.go:172] (0xc0024f4aa0) (5) Data frame handling I0321 22:10:46.746348 6 log.go:172] (0xc000db4420) Data frame received for 1 I0321 22:10:46.746375 6 log.go:172] (0xc001f723c0) (1) Data frame handling I0321 22:10:46.746392 6 log.go:172] (0xc001f723c0) (1) Data frame sent I0321 22:10:46.746410 6 log.go:172] (0xc000db4420) (0xc001f723c0) Stream removed, broadcasting: 1 I0321 22:10:46.746433 6 log.go:172] (0xc000db4420) Go away received I0321 22:10:46.746510 6 log.go:172] (0xc000db4420) (0xc001f723c0) Stream removed, broadcasting: 1 I0321 22:10:46.746547 6 log.go:172] (0xc000db4420) (0xc002351b80) Stream removed, broadcasting: 3 I0321 22:10:46.746571 6 log.go:172] (0xc000db4420) (0xc0024f4aa0) Stream removed, broadcasting: 5 Mar 21 22:10:46.746: INFO: Exec stderr: "" Mar 21 22:10:46.746: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:10:46.746: INFO: >>> kubeConfig: /root/.kube/config I0321 22:10:46.784490 6 log.go:172] (0xc000f873f0) (0xc0011a8dc0) Create stream I0321 22:10:46.784522 6 log.go:172] (0xc000f873f0) (0xc0011a8dc0) Stream added, broadcasting: 1 I0321 22:10:46.791080 6 log.go:172] (0xc000f873f0) Reply frame received for 1 I0321 22:10:46.791133 6 log.go:172] (0xc000f873f0) (0xc00106a320) Create stream I0321 22:10:46.791150 6 log.go:172] (0xc000f873f0) (0xc00106a320) Stream added, broadcasting: 3 I0321 22:10:46.792219 6 log.go:172] (0xc000f873f0) Reply frame received for 3 I0321 22:10:46.792272 6 log.go:172] (0xc000f873f0) (0xc001f725a0) Create stream I0321 22:10:46.792293 6 log.go:172] (0xc000f873f0) (0xc001f725a0) Stream added, broadcasting: 5 I0321 22:10:46.793380 6 log.go:172] (0xc000f873f0) Reply frame received for 5 I0321 22:10:46.862303 6 log.go:172] (0xc000f873f0) Data frame received for 5 I0321 22:10:46.862441 6 log.go:172] (0xc001f725a0) (5) Data frame handling I0321 22:10:46.862485 6 log.go:172] (0xc000f873f0) Data frame received for 3 I0321 22:10:46.862504 6 log.go:172] (0xc00106a320) (3) Data frame handling I0321 22:10:46.862517 6 log.go:172] (0xc00106a320) (3) 
Data frame sent I0321 22:10:46.862536 6 log.go:172] (0xc000f873f0) Data frame received for 3 I0321 22:10:46.862548 6 log.go:172] (0xc00106a320) (3) Data frame handling I0321 22:10:46.864243 6 log.go:172] (0xc000f873f0) Data frame received for 1 I0321 22:10:46.864264 6 log.go:172] (0xc0011a8dc0) (1) Data frame handling I0321 22:10:46.864273 6 log.go:172] (0xc0011a8dc0) (1) Data frame sent I0321 22:10:46.864282 6 log.go:172] (0xc000f873f0) (0xc0011a8dc0) Stream removed, broadcasting: 1 I0321 22:10:46.864333 6 log.go:172] (0xc000f873f0) Go away received I0321 22:10:46.864376 6 log.go:172] (0xc000f873f0) (0xc0011a8dc0) Stream removed, broadcasting: 1 I0321 22:10:46.864406 6 log.go:172] (0xc000f873f0) (0xc00106a320) Stream removed, broadcasting: 3 I0321 22:10:46.864422 6 log.go:172] (0xc000f873f0) (0xc001f725a0) Stream removed, broadcasting: 5 Mar 21 22:10:46.864: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 21 22:10:46.864: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:10:46.864: INFO: >>> kubeConfig: /root/.kube/config I0321 22:10:46.903393 6 log.go:172] (0xc002b38a50) (0xc002351e00) Create stream I0321 22:10:46.903419 6 log.go:172] (0xc002b38a50) (0xc002351e00) Stream added, broadcasting: 1 I0321 22:10:46.905767 6 log.go:172] (0xc002b38a50) Reply frame received for 1 I0321 22:10:46.905803 6 log.go:172] (0xc002b38a50) (0xc00106a5a0) Create stream I0321 22:10:46.905822 6 log.go:172] (0xc002b38a50) (0xc00106a5a0) Stream added, broadcasting: 3 I0321 22:10:46.906805 6 log.go:172] (0xc002b38a50) Reply frame received for 3 I0321 22:10:46.906858 6 log.go:172] (0xc002b38a50) (0xc00106a780) Create stream I0321 22:10:46.906876 6 log.go:172] (0xc002b38a50) (0xc00106a780) Stream added, broadcasting: 5 I0321 22:10:46.907910 6 log.go:172] (0xc002b38a50) Reply frame received for 5 I0321 22:10:46.977969 6 log.go:172] (0xc002b38a50) Data frame received for 5 I0321 22:10:46.978083 6 log.go:172] (0xc00106a780) (5) Data frame handling I0321 22:10:46.978131 6 log.go:172] (0xc002b38a50) Data frame received for 3 I0321 22:10:46.978175 6 log.go:172] (0xc00106a5a0) (3) Data frame handling I0321 22:10:46.978249 6 log.go:172] (0xc00106a5a0) (3) Data frame sent I0321 22:10:46.978268 6 log.go:172] (0xc002b38a50) Data frame received for 3 I0321 22:10:46.978279 6 log.go:172] (0xc00106a5a0) (3) Data frame handling I0321 22:10:46.979717 6 log.go:172] (0xc002b38a50) Data frame received for 1 I0321 22:10:46.979736 6 log.go:172] (0xc002351e00) (1) Data frame handling I0321 22:10:46.979752 6 log.go:172] (0xc002351e00) (1) Data frame sent I0321 22:10:46.979767 6 log.go:172] (0xc002b38a50) (0xc002351e00) Stream removed, broadcasting: 1 I0321 22:10:46.979858 6 log.go:172] (0xc002b38a50) (0xc002351e00) Stream removed, broadcasting: 1 I0321 22:10:46.979872 6 log.go:172] (0xc002b38a50) (0xc00106a5a0) Stream removed, broadcasting: 3 I0321 22:10:46.980024 6 log.go:172] (0xc002b38a50) (0xc00106a780) Stream removed, broadcasting: 5 Mar 21 22:10:46.980: INFO: Exec stderr: "" I0321 22:10:46.980062 6 log.go:172] (0xc002b38a50) Go away received Mar 21 22:10:46.980: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 
22:10:46.980: INFO: >>> kubeConfig: /root/.kube/config I0321 22:10:47.012982 6 log.go:172] (0xc002b39080) (0xc001b20000) Create stream I0321 22:10:47.013009 6 log.go:172] (0xc002b39080) (0xc001b20000) Stream added, broadcasting: 1 I0321 22:10:47.015175 6 log.go:172] (0xc002b39080) Reply frame received for 1 I0321 22:10:47.015211 6 log.go:172] (0xc002b39080) (0xc0011a8fa0) Create stream I0321 22:10:47.015224 6 log.go:172] (0xc002b39080) (0xc0011a8fa0) Stream added, broadcasting: 3 I0321 22:10:47.016041 6 log.go:172] (0xc002b39080) Reply frame received for 3 I0321 22:10:47.016068 6 log.go:172] (0xc002b39080) (0xc00106ab40) Create stream I0321 22:10:47.016079 6 log.go:172] (0xc002b39080) (0xc00106ab40) Stream added, broadcasting: 5 I0321 22:10:47.017075 6 log.go:172] (0xc002b39080) Reply frame received for 5 I0321 22:10:47.078047 6 log.go:172] (0xc002b39080) Data frame received for 5 I0321 22:10:47.078088 6 log.go:172] (0xc00106ab40) (5) Data frame handling I0321 22:10:47.078127 6 log.go:172] (0xc002b39080) Data frame received for 3 I0321 22:10:47.078150 6 log.go:172] (0xc0011a8fa0) (3) Data frame handling I0321 22:10:47.078171 6 log.go:172] (0xc0011a8fa0) (3) Data frame sent I0321 22:10:47.078194 6 log.go:172] (0xc002b39080) Data frame received for 3 I0321 22:10:47.078212 6 log.go:172] (0xc0011a8fa0) (3) Data frame handling I0321 22:10:47.079364 6 log.go:172] (0xc002b39080) Data frame received for 1 I0321 22:10:47.079409 6 log.go:172] (0xc001b20000) (1) Data frame handling I0321 22:10:47.079455 6 log.go:172] (0xc001b20000) (1) Data frame sent I0321 22:10:47.079498 6 log.go:172] (0xc002b39080) (0xc001b20000) Stream removed, broadcasting: 1 I0321 22:10:47.079607 6 log.go:172] (0xc002b39080) (0xc001b20000) Stream removed, broadcasting: 1 I0321 22:10:47.079633 6 log.go:172] (0xc002b39080) (0xc0011a8fa0) Stream removed, broadcasting: 3 I0321 22:10:47.079663 6 log.go:172] (0xc002b39080) (0xc00106ab40) Stream removed, broadcasting: 5 Mar 21 22:10:47.079: INFO: Exec stderr: "" Mar 21 22:10:47.079: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0321 22:10:47.079751 6 log.go:172] (0xc002b39080) Go away received Mar 21 22:10:47.079: INFO: >>> kubeConfig: /root/.kube/config I0321 22:10:47.102752 6 log.go:172] (0xc000f87a20) (0xc0011a92c0) Create stream I0321 22:10:47.102773 6 log.go:172] (0xc000f87a20) (0xc0011a92c0) Stream added, broadcasting: 1 I0321 22:10:47.108823 6 log.go:172] (0xc000f87a20) Reply frame received for 1 I0321 22:10:47.108865 6 log.go:172] (0xc000f87a20) (0xc001b200a0) Create stream I0321 22:10:47.108878 6 log.go:172] (0xc000f87a20) (0xc001b200a0) Stream added, broadcasting: 3 I0321 22:10:47.110112 6 log.go:172] (0xc000f87a20) Reply frame received for 3 I0321 22:10:47.110155 6 log.go:172] (0xc000f87a20) (0xc001f72780) Create stream I0321 22:10:47.110165 6 log.go:172] (0xc000f87a20) (0xc001f72780) Stream added, broadcasting: 5 I0321 22:10:47.110956 6 log.go:172] (0xc000f87a20) Reply frame received for 5 I0321 22:10:47.157789 6 log.go:172] (0xc000f87a20) Data frame received for 3 I0321 22:10:47.157831 6 log.go:172] (0xc001b200a0) (3) Data frame handling I0321 22:10:47.157860 6 log.go:172] (0xc001b200a0) (3) Data frame sent I0321 22:10:47.157874 6 log.go:172] (0xc000f87a20) Data frame received for 3 I0321 22:10:47.157885 6 log.go:172] (0xc001b200a0) (3) Data frame handling I0321 22:10:47.157912 6 log.go:172] (0xc000f87a20) 
Data frame received for 5 I0321 22:10:47.157938 6 log.go:172] (0xc001f72780) (5) Data frame handling I0321 22:10:47.159540 6 log.go:172] (0xc000f87a20) Data frame received for 1 I0321 22:10:47.159568 6 log.go:172] (0xc0011a92c0) (1) Data frame handling I0321 22:10:47.159590 6 log.go:172] (0xc0011a92c0) (1) Data frame sent I0321 22:10:47.159610 6 log.go:172] (0xc000f87a20) (0xc0011a92c0) Stream removed, broadcasting: 1 I0321 22:10:47.159635 6 log.go:172] (0xc000f87a20) Go away received I0321 22:10:47.159787 6 log.go:172] (0xc000f87a20) (0xc0011a92c0) Stream removed, broadcasting: 1 I0321 22:10:47.159812 6 log.go:172] (0xc000f87a20) (0xc001b200a0) Stream removed, broadcasting: 3 I0321 22:10:47.159832 6 log.go:172] (0xc000f87a20) (0xc001f72780) Stream removed, broadcasting: 5 Mar 21 22:10:47.159: INFO: Exec stderr: "" Mar 21 22:10:47.159: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1803 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:10:47.159: INFO: >>> kubeConfig: /root/.kube/config I0321 22:10:47.195493 6 log.go:172] (0xc002b396b0) (0xc001b208c0) Create stream I0321 22:10:47.195523 6 log.go:172] (0xc002b396b0) (0xc001b208c0) Stream added, broadcasting: 1 I0321 22:10:47.197995 6 log.go:172] (0xc002b396b0) Reply frame received for 1 I0321 22:10:47.198035 6 log.go:172] (0xc002b396b0) (0xc001b20aa0) Create stream I0321 22:10:47.198052 6 log.go:172] (0xc002b396b0) (0xc001b20aa0) Stream added, broadcasting: 3 I0321 22:10:47.199052 6 log.go:172] (0xc002b396b0) Reply frame received for 3 I0321 22:10:47.199090 6 log.go:172] (0xc002b396b0) (0xc0024f4b40) Create stream I0321 22:10:47.199105 6 log.go:172] (0xc002b396b0) (0xc0024f4b40) Stream added, broadcasting: 5 I0321 22:10:47.200026 6 log.go:172] (0xc002b396b0) Reply frame received for 5 I0321 22:10:47.250811 6 log.go:172] (0xc002b396b0) Data frame received for 3 I0321 22:10:47.250855 6 log.go:172] (0xc001b20aa0) (3) Data frame handling I0321 22:10:47.250869 6 log.go:172] (0xc001b20aa0) (3) Data frame sent I0321 22:10:47.250883 6 log.go:172] (0xc002b396b0) Data frame received for 3 I0321 22:10:47.250896 6 log.go:172] (0xc001b20aa0) (3) Data frame handling I0321 22:10:47.250924 6 log.go:172] (0xc002b396b0) Data frame received for 5 I0321 22:10:47.250940 6 log.go:172] (0xc0024f4b40) (5) Data frame handling I0321 22:10:47.252369 6 log.go:172] (0xc002b396b0) Data frame received for 1 I0321 22:10:47.252390 6 log.go:172] (0xc001b208c0) (1) Data frame handling I0321 22:10:47.252406 6 log.go:172] (0xc001b208c0) (1) Data frame sent I0321 22:10:47.252423 6 log.go:172] (0xc002b396b0) (0xc001b208c0) Stream removed, broadcasting: 1 I0321 22:10:47.252499 6 log.go:172] (0xc002b396b0) Go away received I0321 22:10:47.252538 6 log.go:172] (0xc002b396b0) (0xc001b208c0) Stream removed, broadcasting: 1 I0321 22:10:47.252567 6 log.go:172] (0xc002b396b0) (0xc001b20aa0) Stream removed, broadcasting: 3 I0321 22:10:47.252596 6 log.go:172] (0xc002b396b0) (0xc0024f4b40) Stream removed, broadcasting: 5 Mar 21 22:10:47.252: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:10:47.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1803" for this suite. 
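Note: the three cases exercised above come down to two PodSpec knobs. The kubelet rewrites /etc/hosts only for containers in hostNetwork=false pods that do not mount their own file over it; a hostNetwork=true pod, or a container with an explicit mount at /etc/hosts (busybox-3 above), keeps the file untouched. A rough sketch of the two opt-out spellings, offered as illustrative rather than the test's exact manifests:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Case 1: hostNetwork=true -- the pod shares the node's network
        // namespace and the kubelet leaves /etc/hosts alone.
        hostNetSpec := corev1.PodSpec{
            HostNetwork: true,
            Containers:  []corev1.Container{{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "3600"}}},
        }

        // Case 2: hostNetwork=false, but the container mounts its own file
        // over /etc/hosts; that mount also opts it out of kubelet management.
        ownHostsSpec := corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name:         "hosts",
                VolumeSource: corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"}},
            }},
            Containers: []corev1.Container{{
                Name: "busybox-3", Image: "busybox", Command: []string{"sleep", "3600"},
                VolumeMounts: []corev1.VolumeMount{{Name: "hosts", MountPath: "/etc/hosts"}},
            }},
        }
        fmt.Println(hostNetSpec.HostNetwork, len(ownHostsSpec.Volumes))
    }

The /etc/hosts-original file read through the exec streams above is how the test compares the kubelet-managed content against the image's pristine copy.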
• [SLOW TEST:11.253 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3780,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:10:47.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0321 22:11:27.891259 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 21 22:11:27.891: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:11:27.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-107" for this suite. 
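Note: the orphaning behavior above hinges on the delete options, not on the replication controller itself. With PropagationPolicy set to Orphan, the API server removes the controller but the garbage collector must leave its pods alone, which is why the test idles for 30 seconds and expects the pods to survive. A minimal sketch using the client-go vintage of this run (the v1.17-era Delete takes the options directly; newer client-go also takes a context argument; namespace and RC name are illustrative):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Orphan propagation: the RC object goes away, but the garbage
        // collector does not cascade the delete to the pods it created.
        orphan := metav1.DeletePropagationOrphan
        err = client.CoreV1().ReplicationControllers("gc-demo").
            Delete("simple-rc", &metav1.DeleteOptions{PropagationPolicy: &orphan})
        fmt.Println(err)
    }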
• [SLOW TEST:40.636 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":238,"skipped":3790,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:11:27.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-a6d82cf4-3b16-4c1e-9121-8fdb8de01ff2 STEP: Creating configMap with name cm-test-opt-upd-3d70c361-9ce0-4062-a660-819bd6eb81b8 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a6d82cf4-3b16-4c1e-9121-8fdb8de01ff2 STEP: Updating configmap cm-test-opt-upd-3d70c361-9ce0-4062-a660-819bd6eb81b8 STEP: Creating configMap with name cm-test-opt-create-7e0175da-19f1-4d17-98c4-7005c6d1502a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:13:05.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1084" for this suite. 
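Note: the "optional updates" wording above refers to ConfigMap volume sources marked Optional, which let the pod start and keep running even when a referenced ConfigMap is deleted or not yet created; the kubelet then refreshes the mounted files on its periodic sync, which is why the test spends most of its 97 seconds just "waiting to observe update in volume". A sketch of such a volume source (the ConfigMap name is illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        optional := true
        vol := corev1.Volume{
            Name: "cm-vol",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
                    // Optional: the pod is scheduled and runs even if the
                    // ConfigMap is absent; the mount appears once it exists.
                    Optional: &optional,
                },
            },
        }
        fmt.Println(vol.Name, *vol.ConfigMap.Optional)
    }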
• [SLOW TEST:97.625 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3794,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:13:05.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:13:05.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6129" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":240,"skipped":3833,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:13:05.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 21 22:13:10.268: INFO: Successfully updated pod "annotationupdated6fde636-74da-4440-8e33-fea8a0538cdf" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:13:12.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7497" for this suite. 
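Note: the projected downwardAPI test above works because the kubelet re-renders downward API files when pod metadata changes, so updating the pod's annotations eventually shows up inside the mounted file without a container restart. A sketch of a projected volume exposing metadata.annotations (volume name and file path are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                // The kubelet rewrites this file when the
                                // pod's annotations are updated.
                                Path:     "annotations",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Println(vol.Projected.Sources[0].DownwardAPI.Items[0].Path)
    }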
• [SLOW TEST:6.656 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3839,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:13:12.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 21 22:13:12.370: INFO: Waiting up to 5m0s for pod "pod-df1abfb6-9edc-4855-91d0-711d9eb9fdf6" in namespace "emptydir-6182" to be "success or failure" Mar 21 22:13:12.386: INFO: Pod "pod-df1abfb6-9edc-4855-91d0-711d9eb9fdf6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.014255ms Mar 21 22:13:14.484: INFO: Pod "pod-df1abfb6-9edc-4855-91d0-711d9eb9fdf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114362454s Mar 21 22:13:16.488: INFO: Pod "pod-df1abfb6-9edc-4855-91d0-711d9eb9fdf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11811956s STEP: Saw pod success Mar 21 22:13:16.488: INFO: Pod "pod-df1abfb6-9edc-4855-91d0-711d9eb9fdf6" satisfied condition "success or failure" Mar 21 22:13:16.491: INFO: Trying to get logs from node jerma-worker pod pod-df1abfb6-9edc-4855-91d0-711d9eb9fdf6 container test-container: STEP: delete the pod Mar 21 22:13:16.568: INFO: Waiting for pod pod-df1abfb6-9edc-4855-91d0-711d9eb9fdf6 to disappear Mar 21 22:13:16.584: INFO: Pod pod-df1abfb6-9edc-4855-91d0-711d9eb9fdf6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:13:16.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6182" for this suite. 
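Note: the (non-root,0644,tmpfs) matrix entry above combines three things: an emptyDir with Medium: Memory (backed by tmpfs), a file created with mode 0644, and a container running as a non-root UID. A rough equivalent pod spec, with the UID, image, and shell probe as stand-ins for the suite's mounttest image:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        uid := int64(1001) // any non-root UID
        pod := corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "scratch",
                VolumeSource: corev1.VolumeSource{
                    // Medium: Memory backs the emptyDir with tmpfs.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:  "writer",
                Image: "busybox",
                // Write a file with mode 0644 and read it back as UID 1001.
                Command:         []string{"sh", "-c", "echo hi > /mnt/f && chmod 0644 /mnt/f && cat /mnt/f"},
                SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                VolumeMounts:    []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        }
        fmt.Println(pod.Volumes[0].EmptyDir.Medium)
    }

The "success or failure" polling seen in the log is the framework waiting for this one-shot pod to reach Succeeded before it scrapes the container logs.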
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3902,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:13:16.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 21 22:13:16.665: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8bcae0c-7c72-4db4-8838-cad7a4e06ec2" in namespace "downward-api-7464" to be "success or failure" Mar 21 22:13:16.668: INFO: Pod "downwardapi-volume-b8bcae0c-7c72-4db4-8838-cad7a4e06ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.438002ms Mar 21 22:13:18.693: INFO: Pod "downwardapi-volume-b8bcae0c-7c72-4db4-8838-cad7a4e06ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027769779s Mar 21 22:13:20.696: INFO: Pod "downwardapi-volume-b8bcae0c-7c72-4db4-8838-cad7a4e06ec2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031075283s STEP: Saw pod success Mar 21 22:13:20.696: INFO: Pod "downwardapi-volume-b8bcae0c-7c72-4db4-8838-cad7a4e06ec2" satisfied condition "success or failure" Mar 21 22:13:20.699: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b8bcae0c-7c72-4db4-8838-cad7a4e06ec2 container client-container: STEP: delete the pod Mar 21 22:13:20.716: INFO: Waiting for pod downwardapi-volume-b8bcae0c-7c72-4db4-8838-cad7a4e06ec2 to disappear Mar 21 22:13:20.720: INFO: Pod downwardapi-volume-b8bcae0c-7c72-4db4-8838-cad7a4e06ec2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:13:20.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7464" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3907,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:13:20.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-plrz STEP: Creating a pod to test atomic-volume-subpath Mar 21 22:13:20.891: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-plrz" in namespace "subpath-3915" to be "success or failure" Mar 21 22:13:20.901: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.804272ms Mar 21 22:13:22.905: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014272592s Mar 21 22:13:24.909: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Running", Reason="", readiness=true. Elapsed: 4.018074057s Mar 21 22:13:26.913: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Running", Reason="", readiness=true. Elapsed: 6.022103163s Mar 21 22:13:28.917: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Running", Reason="", readiness=true. Elapsed: 8.026299189s Mar 21 22:13:30.922: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Running", Reason="", readiness=true. Elapsed: 10.030627319s Mar 21 22:13:32.926: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Running", Reason="", readiness=true. Elapsed: 12.034629243s Mar 21 22:13:34.937: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Running", Reason="", readiness=true. Elapsed: 14.046003529s Mar 21 22:13:36.942: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Running", Reason="", readiness=true. Elapsed: 16.050336094s Mar 21 22:13:38.946: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Running", Reason="", readiness=true. Elapsed: 18.054694385s Mar 21 22:13:40.956: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Running", Reason="", readiness=true. Elapsed: 20.064465205s Mar 21 22:13:42.993: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Running", Reason="", readiness=true. Elapsed: 22.101859814s Mar 21 22:13:44.998: INFO: Pod "pod-subpath-test-downwardapi-plrz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.106554212s STEP: Saw pod success Mar 21 22:13:44.998: INFO: Pod "pod-subpath-test-downwardapi-plrz" satisfied condition "success or failure" Mar 21 22:13:45.001: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-plrz container test-container-subpath-downwardapi-plrz: STEP: delete the pod Mar 21 22:13:45.024: INFO: Waiting for pod pod-subpath-test-downwardapi-plrz to disappear Mar 21 22:13:45.064: INFO: Pod pod-subpath-test-downwardapi-plrz no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-plrz Mar 21 22:13:45.064: INFO: Deleting pod "pod-subpath-test-downwardapi-plrz" in namespace "subpath-3915" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:13:45.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3915" for this suite. • [SLOW TEST:24.348 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":244,"skipped":3927,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:13:45.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 21 22:13:45.161: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 21 22:13:55.444: INFO: >>> kubeConfig: /root/.kube/config Mar 21 22:13:58.321: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:14:08.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1528" for this suite. 
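Note: the multi-version CRD case above is driven by spec.versions: every version with served: true is published into the OpenAPI document, while exactly one version carries storage: true. A skeletal apiextensions.k8s.io/v1 CRD with two served versions (the group, kind, and trivial schema are illustrative; the test's generated CRDs differ):

    package main

    import (
        "fmt"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // v1 CRDs require a structural schema; "type: object" is the minimum.
        schema := &apiextv1.CustomResourceValidation{
            OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
        }
        crd := apiextv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "multis.demo.example.com"},
            Spec: apiextv1.CustomResourceDefinitionSpec{
                Group: "demo.example.com",
                Scope: apiextv1.NamespaceScoped,
                Names: apiextv1.CustomResourceDefinitionNames{
                    Plural: "multis", Singular: "multi", Kind: "Multi", ListKind: "MultiList",
                },
                Versions: []apiextv1.CustomResourceDefinitionVersion{
                    {Name: "v1", Served: true, Storage: true, Schema: schema},
                    {Name: "v2", Served: true, Storage: false, Schema: schema},
                },
            },
        }
        fmt.Println(len(crd.Spec.Versions))
    }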
• [SLOW TEST:23.579 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":245,"skipped":3933,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:14:08.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-dnlm STEP: Creating a pod to test atomic-volume-subpath Mar 21 22:14:08.767: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dnlm" in namespace "subpath-13" to be "success or failure" Mar 21 22:14:08.770: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Pending", Reason="", readiness=false. Elapsed: 3.656459ms Mar 21 22:14:11.049: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281723269s Mar 21 22:14:13.052: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Running", Reason="", readiness=true. Elapsed: 4.284804292s Mar 21 22:14:15.056: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Running", Reason="", readiness=true. Elapsed: 6.289266308s Mar 21 22:14:17.060: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Running", Reason="", readiness=true. Elapsed: 8.293353289s Mar 21 22:14:19.167: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Running", Reason="", readiness=true. Elapsed: 10.400227792s Mar 21 22:14:21.171: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Running", Reason="", readiness=true. Elapsed: 12.40432734s Mar 21 22:14:23.175: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Running", Reason="", readiness=true. Elapsed: 14.407902902s Mar 21 22:14:25.178: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Running", Reason="", readiness=true. Elapsed: 16.410869788s Mar 21 22:14:27.181: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Running", Reason="", readiness=true. Elapsed: 18.414189914s Mar 21 22:14:29.299: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Running", Reason="", readiness=true. Elapsed: 20.532604586s Mar 21 22:14:31.303: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Running", Reason="", readiness=true. Elapsed: 22.536640537s Mar 21 22:14:33.308: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.541047693s Mar 21 22:14:35.312: INFO: Pod "pod-subpath-test-configmap-dnlm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.545257806s STEP: Saw pod success Mar 21 22:14:35.312: INFO: Pod "pod-subpath-test-configmap-dnlm" satisfied condition "success or failure" Mar 21 22:14:35.315: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-dnlm container test-container-subpath-configmap-dnlm: STEP: delete the pod Mar 21 22:14:35.359: INFO: Waiting for pod pod-subpath-test-configmap-dnlm to disappear Mar 21 22:14:35.370: INFO: Pod pod-subpath-test-configmap-dnlm no longer exists STEP: Deleting pod pod-subpath-test-configmap-dnlm Mar 21 22:14:35.370: INFO: Deleting pod "pod-subpath-test-configmap-dnlm" in namespace "subpath-13" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:14:35.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-13" for this suite. • [SLOW TEST:26.698 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":246,"skipped":3957,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:14:35.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1596 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 21 22:14:35.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5865' Mar 21 22:14:40.505: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 21 22:14:40.505: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1602 Mar 21 22:14:42.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5865' Mar 21 22:14:42.679: INFO: stderr: "" Mar 21 22:14:42.679: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:14:42.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5865" for this suite. • [SLOW TEST:7.308 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1590 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":247,"skipped":3981,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:14:42.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:14:42.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2158" for this suite. 
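Note: the discovery assertions above can be replayed against any cluster with client-go's discovery client, which walks the same /apis, /apis/&lt;group&gt;, and /apis/&lt;group&gt;/&lt;version&gt; documents the test fetches. A sketch, again using the v1.17-era signatures of this run (no context argument):

    package main

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        dc, err := discovery.NewDiscoveryClientForConfig(config)
        if err != nil {
            panic(err)
        }
        // /apis: the apiextensions.k8s.io group should be listed.
        groups, err := dc.ServerGroups()
        if err != nil {
            panic(err)
        }
        for _, g := range groups.Groups {
            if g.Name == "apiextensions.k8s.io" {
                fmt.Println("found group, preferred version:", g.PreferredVersion.GroupVersion)
            }
        }
        // /apis/apiextensions.k8s.io/v1: customresourcedefinitions
        // should appear as a resource.
        rl, err := dc.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
        if err != nil {
            panic(err)
        }
        for _, r := range rl.APIResources {
            if r.Name == "customresourcedefinitions" {
                fmt.Println("found resource:", r.Name)
            }
        }
    }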
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":248,"skipped":4022,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:14:42.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 21 22:14:47.647: INFO: Successfully updated pod "pod-update-6f64b14d-5757-4f9b-a5ab-689c2ba815da" STEP: verifying the updated pod is in kubernetes Mar 21 22:14:47.682: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:14:47.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1944" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4035,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:14:47.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 21 22:14:55.831: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 21 22:14:55.860: INFO: Pod pod-with-prestop-http-hook still exists Mar 21 22:14:57.860: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 21 22:14:57.865: INFO: Pod pod-with-prestop-http-hook still exists Mar 21 22:14:59.860: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 21 22:14:59.864: INFO: Pod pod-with-prestop-http-hook still exists Mar 21 22:15:01.860: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 21 22:15:01.864: INFO: Pod pod-with-prestop-http-hook still exists Mar 21 22:15:03.860: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 21 22:15:03.864: INFO: Pod pod-with-prestop-http-hook still exists Mar 21 22:15:05.860: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 21 22:15:05.864: INFO: Pod pod-with-prestop-http-hook still exists Mar 21 22:15:07.860: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 21 22:15:07.864: INFO: Pod pod-with-prestop-http-hook still exists Mar 21 22:15:09.860: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 21 22:15:09.864: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:15:09.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3131" for this suite. • [SLOW TEST:22.191 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4055,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:15:09.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:15:26.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4804" for this suite. • [SLOW TEST:16.298 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":251,"skipped":4070,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:15:26.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Mar 21 22:15:26.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8406' Mar 21 22:15:26.654: INFO: stderr: "" Mar 21 22:15:26.654: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
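The readiness polling that follows drives kubectl with go-template output; a standalone equivalent of the two checks, using the same namespace and label selector as this run (<pod-name> is a placeholder):

# List the update-demo pods by label:
kubectl get pods -l name=update-demo --namespace=kubectl-8406 \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
# Print "true" only if the pod's update-demo container is running:
kubectl get pod <pod-name> --namespace=kubectl-8406 \
  -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'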
Mar 21 22:15:26.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8406' Mar 21 22:15:26.781: INFO: stderr: "" Mar 21 22:15:26.781: INFO: stdout: "update-demo-nautilus-gp8z9 update-demo-nautilus-j98r5 " Mar 21 22:15:26.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gp8z9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8406' Mar 21 22:15:26.883: INFO: stderr: "" Mar 21 22:15:26.883: INFO: stdout: "" Mar 21 22:15:26.883: INFO: update-demo-nautilus-gp8z9 is created but not running Mar 21 22:15:31.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8406' Mar 21 22:15:31.980: INFO: stderr: "" Mar 21 22:15:31.980: INFO: stdout: "update-demo-nautilus-gp8z9 update-demo-nautilus-j98r5 " Mar 21 22:15:31.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gp8z9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8406' Mar 21 22:15:32.066: INFO: stderr: "" Mar 21 22:15:32.066: INFO: stdout: "true" Mar 21 22:15:32.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gp8z9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8406' Mar 21 22:15:32.146: INFO: stderr: "" Mar 21 22:15:32.146: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 22:15:32.146: INFO: validating pod update-demo-nautilus-gp8z9 Mar 21 22:15:32.150: INFO: got data: { "image": "nautilus.jpg" } Mar 21 22:15:32.150: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 22:15:32.150: INFO: update-demo-nautilus-gp8z9 is verified up and running Mar 21 22:15:32.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j98r5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8406' Mar 21 22:15:32.236: INFO: stderr: "" Mar 21 22:15:32.236: INFO: stdout: "true" Mar 21 22:15:32.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j98r5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8406' Mar 21 22:15:32.341: INFO: stderr: "" Mar 21 22:15:32.341: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 22:15:32.341: INFO: validating pod update-demo-nautilus-j98r5 Mar 21 22:15:32.346: INFO: got data: { "image": "nautilus.jpg" } Mar 21 22:15:32.346: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
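The rolling-update step just below uses kubectl rolling-update, which this run's own stderr flags as deprecated in favor of "rollout". With a Deployment rather than a bare replication controller, the same image swap is declarative; a sketch, assuming a hypothetical Deployment named update-demo:

# Swap the image and watch the rollout complete (Deployment assumed;
# this run actually drives a replication controller):
kubectl set image deployment/update-demo \
  update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0 \
  --namespace=kubectl-8406
kubectl rollout status deployment/update-demo --namespace=kubectl-8406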
Mar 21 22:15:32.346: INFO: update-demo-nautilus-j98r5 is verified up and running STEP: rolling-update to new replication controller Mar 21 22:15:32.348: INFO: scanned /root for discovery docs: Mar 21 22:15:32.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8406' Mar 21 22:15:54.863: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 21 22:15:54.863: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 21 22:15:54.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8406' Mar 21 22:15:54.962: INFO: stderr: "" Mar 21 22:15:54.962: INFO: stdout: "update-demo-kitten-ntbnd update-demo-kitten-w4b6g " Mar 21 22:15:54.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ntbnd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8406' Mar 21 22:15:55.046: INFO: stderr: "" Mar 21 22:15:55.046: INFO: stdout: "true" Mar 21 22:15:55.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ntbnd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8406' Mar 21 22:15:55.132: INFO: stderr: "" Mar 21 22:15:55.132: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 21 22:15:55.132: INFO: validating pod update-demo-kitten-ntbnd Mar 21 22:15:55.137: INFO: got data: { "image": "kitten.jpg" } Mar 21 22:15:55.137: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 21 22:15:55.137: INFO: update-demo-kitten-ntbnd is verified up and running Mar 21 22:15:55.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-w4b6g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8406' Mar 21 22:15:55.223: INFO: stderr: "" Mar 21 22:15:55.223: INFO: stdout: "true" Mar 21 22:15:55.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-w4b6g -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8406' Mar 21 22:15:55.306: INFO: stderr: "" Mar 21 22:15:55.306: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 21 22:15:55.306: INFO: validating pod update-demo-kitten-w4b6g Mar 21 22:15:55.310: INFO: got data: { "image": "kitten.jpg" } Mar 21 22:15:55.310: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 21 22:15:55.310: INFO: update-demo-kitten-w4b6g is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:15:55.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8406" for this suite. • [SLOW TEST:29.137 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":252,"skipped":4078,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:15:55.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-58a3d8fa-5e4f-4024-aebc-35fd1f1632cb STEP: Creating a pod to test consume secrets Mar 21 22:15:55.410: INFO: Waiting up to 5m0s for pod "pod-secrets-d0934f0b-6805-4e2e-b699-23a45af8c91b" in namespace "secrets-1404" to be "success or failure" Mar 21 22:15:55.413: INFO: Pod "pod-secrets-d0934f0b-6805-4e2e-b699-23a45af8c91b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.482377ms Mar 21 22:15:57.420: INFO: Pod "pod-secrets-d0934f0b-6805-4e2e-b699-23a45af8c91b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009821342s Mar 21 22:15:59.424: INFO: Pod "pod-secrets-d0934f0b-6805-4e2e-b699-23a45af8c91b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013876456s STEP: Saw pod success Mar 21 22:15:59.424: INFO: Pod "pod-secrets-d0934f0b-6805-4e2e-b699-23a45af8c91b" satisfied condition "success or failure" Mar 21 22:15:59.427: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d0934f0b-6805-4e2e-b699-23a45af8c91b container secret-volume-test: STEP: delete the pod Mar 21 22:15:59.445: INFO: Waiting for pod pod-secrets-d0934f0b-6805-4e2e-b699-23a45af8c91b to disappear Mar 21 22:15:59.460: INFO: Pod pod-secrets-d0934f0b-6805-4e2e-b699-23a45af8c91b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:15:59.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1404" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4079,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:15:59.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 21 22:16:03.638: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:16:03.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7189" for this suite. 
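The termination-message behavior verified above comes down to two container fields. A minimal sketch (pod name and image are illustrative assumptions; as in the spec, the container writes "OK" to the file and succeeds, so no log fallback occurs):

# terminationMessagePath is read by the kubelet after the container exits;
# FallbackToLogsOnError would use the log tail only if the container failed
# and wrote nothing to the file.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29    # illustrative image
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the pod succeeds, the message surfaces in the container status:
kubectl get pod termination-message-demo -o \
  go-template='{{range .status.containerStatuses}}{{.state.terminated.message}}{{end}}'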
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4096,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:16:03.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b3bfb8ec-6d44-4389-a463-12e1bc618fea STEP: Creating a pod to test consume secrets Mar 21 22:16:03.960: INFO: Waiting up to 5m0s for pod "pod-secrets-f6e9b4ff-7691-42bc-9f6f-5708abb2ad89" in namespace "secrets-5889" to be "success or failure" Mar 21 22:16:03.983: INFO: Pod "pod-secrets-f6e9b4ff-7691-42bc-9f6f-5708abb2ad89": Phase="Pending", Reason="", readiness=false. Elapsed: 23.05264ms Mar 21 22:16:05.987: INFO: Pod "pod-secrets-f6e9b4ff-7691-42bc-9f6f-5708abb2ad89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027180325s Mar 21 22:16:07.992: INFO: Pod "pod-secrets-f6e9b4ff-7691-42bc-9f6f-5708abb2ad89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031700827s STEP: Saw pod success Mar 21 22:16:07.992: INFO: Pod "pod-secrets-f6e9b4ff-7691-42bc-9f6f-5708abb2ad89" satisfied condition "success or failure" Mar 21 22:16:07.995: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-f6e9b4ff-7691-42bc-9f6f-5708abb2ad89 container secret-volume-test: STEP: delete the pod Mar 21 22:16:08.041: INFO: Waiting for pod pod-secrets-f6e9b4ff-7691-42bc-9f6f-5708abb2ad89 to disappear Mar 21 22:16:08.061: INFO: Pod pod-secrets-f6e9b4ff-7691-42bc-9f6f-5708abb2ad89 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:16:08.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5889" for this suite. STEP: Destroying namespace "secret-namespace-4672" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:16:08.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 21 22:16:08.135: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:16:19.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8724" for this suite. • [SLOW TEST:11.186 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4134,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:16:19.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-836/configmap-test-f7e5e428-85a0-4a57-9cce-ec07445b6b56 STEP: Creating a pod to test consume configMaps Mar 21 22:16:19.327: INFO: Waiting up to 5m0s for pod "pod-configmaps-daa8ff5c-9836-4fa8-9bef-67b913a61d7d" in namespace "configmap-836" to be "success or failure" Mar 21 22:16:19.331: INFO: Pod "pod-configmaps-daa8ff5c-9836-4fa8-9bef-67b913a61d7d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.508537ms Mar 21 22:16:21.334: INFO: Pod "pod-configmaps-daa8ff5c-9836-4fa8-9bef-67b913a61d7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007293195s Mar 21 22:16:23.339: INFO: Pod "pod-configmaps-daa8ff5c-9836-4fa8-9bef-67b913a61d7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011527391s STEP: Saw pod success Mar 21 22:16:23.339: INFO: Pod "pod-configmaps-daa8ff5c-9836-4fa8-9bef-67b913a61d7d" satisfied condition "success or failure" Mar 21 22:16:23.342: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-daa8ff5c-9836-4fa8-9bef-67b913a61d7d container env-test: STEP: delete the pod Mar 21 22:16:23.416: INFO: Waiting for pod pod-configmaps-daa8ff5c-9836-4fa8-9bef-67b913a61d7d to disappear Mar 21 22:16:23.426: INFO: Pod pod-configmaps-daa8ff5c-9836-4fa8-9bef-67b913a61d7d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:16:23.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-836" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4147,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:16:23.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:16:23.487: INFO: Creating deployment "webserver-deployment" Mar 21 22:16:23.510: INFO: Waiting for observed generation 1 Mar 21 22:16:25.544: INFO: Waiting for all required pods to come up Mar 21 22:16:25.552: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 21 22:16:33.560: INFO: Waiting for deployment "webserver-deployment" to complete Mar 21 22:16:33.566: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 21 22:16:33.571: INFO: Updating deployment webserver-deployment Mar 21 22:16:33.571: INFO: Waiting for observed generation 2 Mar 21 22:16:35.581: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 21 22:16:35.584: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 21 22:16:35.587: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 21 22:16:35.594: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 21 22:16:35.594: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 21 22:16:35.596: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 21 22:16:35.600: INFO: Verifying that deployment "webserver-deployment" has 
minimum required number of available replicas Mar 21 22:16:35.600: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 21 22:16:35.605: INFO: Updating deployment webserver-deployment Mar 21 22:16:35.605: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 21 22:16:35.774: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 21 22:16:35.966: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 21 22:16:36.273: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9725 /apis/apps/v1/namespaces/deployment-9725/deployments/webserver-deployment f3cbbf4f-e417-43a4-8f44-72af24c690fc 1664833 3 2020-03-21 22:16:23 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004685668 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-21 22:16:33 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-21 22:16:35 +0000 UTC,LastTransitionTime:2020-03-21 22:16:35 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 21 22:16:36.387: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-9725 /apis/apps/v1/namespaces/deployment-9725/replicasets/webserver-deployment-c7997dcc8 1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 1664880 3 2020-03-21 22:16:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment f3cbbf4f-e417-43a4-8f44-72af24c690fc 0xc004685b37 0xc004685b38}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004685ba8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 21 22:16:36.387: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 21 22:16:36.387: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-9725 /apis/apps/v1/namespaces/deployment-9725/replicasets/webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 1664877 3 2020-03-21 22:16:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment f3cbbf4f-e417-43a4-8f44-72af24c690fc 0xc004685a77 0xc004685a78}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004685ad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 21 22:16:36.412: INFO: Pod "webserver-deployment-595b5b9587-2km8c" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2km8c webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-2km8c 06dfcef1-55df-461b-ad7a-11b27293c2a8 1664746 0 2020-03-21 22:16:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc004663c57 0xc004663c58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.19,StartTime:2020-03-21 22:16:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-21 22:16:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://66bc5ea0a1be167d7fa8c0e7e883e8095a33a2cf5cd49191c38c4a3491883b37,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.412: INFO: Pod "webserver-deployment-595b5b9587-2m5wg" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2m5wg webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-2m5wg fd3265c5-a83f-4ade-beaa-46fc1e2e502e 1664743 0 2020-03-21 22:16:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc004663dd7 0xc004663dd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Ef
fect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.20,StartTime:2020-03-21 22:16:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-21 22:16:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://26572879f8292307585ba90bb621a36b557e3647d01888d79acbce4beb6cdbc3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.412: INFO: Pod "webserver-deployment-595b5b9587-2pgk5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2pgk5 webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-2pgk5 018ac8e1-2bd5-4beb-b46c-463509801406 1664722 0 2020-03-21 22:16:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc004663f57 0xc004663f58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.18,StartTime:2020-03-21 22:16:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-21 22:16:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3b92971fed1e0863aa7a56d09d297abd52865f9a4306f4ddf18af0974f122928,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.412: INFO: Pod "webserver-deployment-595b5b9587-6cpcq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6cpcq webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-6cpcq ee2fef4a-2c4d-4687-8367-00f234116ff5 1664883 0 2020-03-21 22:16:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a00d7 0xc0045a00d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-21 22:16:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.412: INFO: Pod "webserver-deployment-595b5b9587-6qh7s" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6qh7s webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-6qh7s 7d257339-e094-44df-a1d4-0f0aee9de2e4 1664882 0 2020-03-21 22:16:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a0237 0xc0045a0238}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-21 22:16:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.412: INFO: Pod "webserver-deployment-595b5b9587-7xkxj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7xkxj webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-7xkxj 5e1d62e0-9584-4426-ba72-e4861dbc8425 1664872 0 2020-03-21 22:16:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a0397 0xc0045a0398}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.412: INFO: Pod "webserver-deployment-595b5b9587-8rcg6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8rcg6 webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-8rcg6 be6a4358-15a6-4383-b699-fe776732cb0c 1664876 0 2020-03-21 22:16:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a04b7 0xc0045a04b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-21 22:16:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.413: INFO: Pod "webserver-deployment-595b5b9587-g8l8j" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g8l8j webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-g8l8j cc5564d3-b541-486d-b63a-8a904a76868c 1664697 0 2020-03-21 22:16:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a0617 0xc0045a0618}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.17,StartTime:2020-03-21 22:16:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-21 22:16:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d15e540f55d1f8d34723067b16ddffb607e019f3d0794a33cf0d4a18b1e21e05,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.413: INFO: Pod "webserver-deployment-595b5b9587-jdf87" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jdf87 webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-jdf87 f9a54c2f-9407-48c0-a04e-c88853f0c530 1664862 0 2020-03-21 22:16:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a0797 0xc0045a0798}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.413: INFO: Pod "webserver-deployment-595b5b9587-kdkm5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kdkm5 webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-kdkm5 ec313086-147e-4642-b78c-6d22b1bde054 1664848 0 2020-03-21 22:16:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a08b7 0xc0045a08b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.413: INFO: Pod "webserver-deployment-595b5b9587-mnct6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mnct6 webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-mnct6 291050ae-d980-4963-a834-9fa7b69df41b 1664866 0 2020-03-21 22:16:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a09d7 0xc0045a09d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.413: INFO: Pod "webserver-deployment-595b5b9587-mw6qv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mw6qv webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-mw6qv e26d82df-4398-40c4-9820-1d6addb350a5 1664708 0 2020-03-21 22:16:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a0af7 0xc0045a0af8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.69,StartTime:2020-03-21 22:16:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-21 22:16:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f015278c7313a33da22194ac22013b29d20299d2fb922d1cc80e233a7d751484,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.413: INFO: Pod "webserver-deployment-595b5b9587-nbvg8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nbvg8 webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-nbvg8 d7ffaeb6-d5c1-4ad7-a606-c56e9a86c7d5 1664859 0 2020-03-21 22:16:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a0c87 0xc0045a0c88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.413: INFO: Pod "webserver-deployment-595b5b9587-ncghm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ncghm webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-ncghm 48c1ab6e-c5c6-4493-9eea-db9f621649c2 1664870 0 2020-03-21 22:16:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a0da7 0xc0045a0da8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.413: INFO: Pod "webserver-deployment-595b5b9587-tx8p6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tx8p6 webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-tx8p6 fb7e1925-78ce-4256-b259-3e32adb3e0c8 1664865 0 2020-03-21 22:16:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a0ec7 0xc0045a0ec8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.414: INFO: Pod "webserver-deployment-595b5b9587-v57rp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v57rp webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-v57rp f74485b4-9fbf-40b2-8a09-4fde60692981 1664717 0 2020-03-21 22:16:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a0fe7 0xc0045a0fe8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.70,StartTime:2020-03-21 22:16:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-21 22:16:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://02668554ec9a7d9fea855ba1ea4e72f3dca1175228286842b2cb3510a35bcb34,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.414: INFO: Pod "webserver-deployment-595b5b9587-vblp7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vblp7 webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-vblp7 29f74988-2b11-4313-aaec-672b7c19a83a 1664682 0 2020-03-21 22:16:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a1167 0xc0045a1168}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.68,StartTime:2020-03-21 22:16:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-21 22:16:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0a3696509953d46f64c3522c658181c207dbd9b2d64a00b1c11437522d232f72,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.414: INFO: Pod "webserver-deployment-595b5b9587-vs69d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vs69d webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-vs69d fe5d3cbf-3435-426c-935a-a733cfd901a6 1664845 0 2020-03-21 22:16:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a12e7 0xc0045a12e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.414: INFO: Pod "webserver-deployment-595b5b9587-vxknv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vxknv webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-vxknv 4d8d26be-7581-403e-ba8b-3f7270fff3f0 1664755 0 2020-03-21 22:16:23 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a1407 0xc0045a1408}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.72,StartTime:2020-03-21 22:16:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-21 22:16:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fb4f6ecf051a94752319e01e16acb0f5b7334f6dbbcd219e82d6eb12a102ef87,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.72,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 22:16:36.414: INFO: Pod "webserver-deployment-595b5b9587-xdbqf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xdbqf webserver-deployment-595b5b9587- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-595b5b9587-xdbqf e2e99d9d-d56e-4451-8e26-bb941da00ac4 1664858 0 2020-03-21 22:16:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44c85469-8e54-4cab-b1f0-e415de51a668 0xc0045a1587 0xc0045a1588}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
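(The availability calls above hinge on each pod's Ready condition: the Pending pods report either only PodScheduled or Ready:False with Reason:ContainersNotReady while httpd is still in ContainerCreating, whereas the Running pods report Ready:True and are counted as available. A minimal sketch of that rule in Go, assuming only the stock k8s.io/api and k8s.io/apimachinery modules; isPodAvailable is an illustrative name, not the framework's own helper:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable reports whether the pod's Ready condition is True and has
// been True for at least minReadySeconds as of time now. A pod with no Ready
// condition recorded yet (e.g. Phase:Pending above) is never available.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady {
			continue
		}
		if c.Status != corev1.ConditionTrue {
			return false
		}
		if minReadySeconds == 0 {
			return true
		}
		minReady := time.Duration(minReadySeconds) * time.Second
		return c.LastTransitionTime.Add(minReady).Before(now.Time)
	}
	return false
}

func main() {
	// A pod whose Ready condition flipped to True ten seconds ago, like the
	// Running webserver-deployment pods in the records above.
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{{
		Type:               corev1.PodReady,
		Status:             corev1.ConditionTrue,
		LastTransitionTime: metav1.Time{Time: time.Now().Add(-10 * time.Second)},
	}}}}
	fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // true
}

With a minReadySeconds of 0, as in this Deployment, a pod counts as available the moment Ready transitions to True, which matches the transitions logged above.)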
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.414: INFO: Pod "webserver-deployment-c7997dcc8-4kc57" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4kc57 webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-4kc57 49cf974c-a924-44a3-a128-b67afacc6388 1664860 0 2020-03-21 22:16:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0045a16a7 0xc0045a16a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tol
eration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.414: INFO: Pod "webserver-deployment-c7997dcc8-56b26" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-56b26 webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-56b26 d61a1569-f14b-4c3e-b520-ad157c933d0d 1664791 0 2020-03-21 22:16:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0045a17d7 0xc0045a17d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Oper
ator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-21 22:16:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.414: INFO: Pod "webserver-deployment-c7997dcc8-9t2vw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9t2vw webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-9t2vw 6e4bea6e-3c2e-40ea-b00d-58352595c583 1664871 0 2020-03-21 22:16:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0045a1957 0xc0045a1958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.415: INFO: Pod "webserver-deployment-c7997dcc8-bvtxh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bvtxh webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-bvtxh a598c0c7-3aeb-49f0-8c0b-49ebaf4282a9 1664815 0 2020-03-21 22:16:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0045a1a87 0xc0045a1a88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-21 22:16:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.415: INFO: Pod "webserver-deployment-c7997dcc8-gwcct" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gwcct webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-gwcct 38cd8311-7fab-4260-8b49-c919409fb81e 1664795 0 2020-03-21 22:16:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0045a1c07 0xc0045a1c08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readine
ssGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-21 22:16:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.415: INFO: Pod "webserver-deployment-c7997dcc8-kj2pt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kj2pt webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-kj2pt 30b1a307-5f89-484c-b57b-86c991db3139 1664843 0 2020-03-21 22:16:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0045a1da7 0xc0045a1da8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.415: INFO: Pod "webserver-deployment-c7997dcc8-pp8cl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pp8cl webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-pp8cl 78335afc-4a1a-4008-8919-d4c5b57fe864 1664868 0 2020-03-21 22:16:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0045a1ed7 0xc0045a1ed8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.415: INFO: Pod "webserver-deployment-c7997dcc8-s6678" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s6678 webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-s6678 458bcec8-48e4-4dd2-8da9-70a7574e7573 1664854 0 2020-03-21 22:16:35 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0043b0007 0xc0043b0008}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.415: INFO: Pod "webserver-deployment-c7997dcc8-s6xz4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s6xz4 webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-s6xz4 
6d1a7698-de4a-464e-93f1-dc0b809174b5 1664810 0 2020-03-21 22:16:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0043b0137 0xc0043b0138}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-21 22:16:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.415: INFO: Pod "webserver-deployment-c7997dcc8-sznft" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sznft webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-sznft 3dcc1280-ce6c-4c02-9bad-98517c74ad91 1664813 0 2020-03-21 22:16:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0043b02c7 0xc0043b02c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effe
ct:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-21 22:16:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.415: INFO: Pod "webserver-deployment-c7997dcc8-w2658" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w2658 webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-w2658 958f9078-fadd-46c3-ba34-be5f7b394419 1664875 0 2020-03-21 22:16:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0043b0457 0xc0043b0458}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.416: INFO: Pod "webserver-deployment-c7997dcc8-w2sw6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w2sw6 webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-w2sw6 a9701f89-2756-4f50-8fe8-f99a0ec2c36f 1664869 0 2020-03-21 22:16:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0043b05a7 0xc0043b05a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 22:16:36.416: INFO: Pod "webserver-deployment-c7997dcc8-w6kcn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w6kcn webserver-deployment-c7997dcc8- deployment-9725 /api/v1/namespaces/deployment-9725/pods/webserver-deployment-c7997dcc8-w6kcn 65e061c5-1cba-4ff6-ba45-997d9bbdcd86 1664867 0 2020-03-21 22:16:36 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 1b9c00e9-7e6c-4cec-a07d-a71f8566b34f 0xc0043b0727 0xc0043b0728}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7rzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7rzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7rzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-21 22:16:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:16:36.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9725" for this suite. 
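The proportional-scaling pass above can be approximated outside the e2e framework with plain kubectl. This is a minimal sketch, not the test's own code; the deployment name, images, and replica counts are illustrative. Like the test, it rolls a working httpd deployment to an unresolvable image so that two ReplicaSets coexist, then scales the deployment mid-rollout:

  $ kubectl create deployment webserver --image=docker.io/library/httpd:2.4.38-alpine
  $ kubectl scale deployment/webserver --replicas=10
  # Roll to an image that can never be pulled, so the old and new
  # ReplicaSets both keep pods
  $ kubectl set image deployment/webserver httpd=webserver:404
  # Scale while the rollout is stuck; the RollingUpdate strategy adds the
  # new replicas to both ReplicaSets in proportion to their current sizes
  $ kubectl scale deployment/webserver --replicas=30
  $ kubectl get rs -l app=webserver

That proportional split is why the dump above lists pods from both ReplicaSets, 595b5b9587 (httpd:2.4.38-alpine) and c7997dcc8 (webserver:404), with the latter stuck Pending on ContainerCreating.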
• [SLOW TEST:13.134 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":258,"skipped":4153,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 22:16:36.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Mar 21 22:16:36.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Mar 21 22:16:36.969: INFO: stderr: ""
Mar 21 22:16:36.969: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 22:16:36.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5313" for this suite.
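The check above only asserts that 'kubectl cluster-info' mentions the master endpoint. A rough stand-alone equivalent (the grep pattern is an assumption; the \x1b[...m sequences in the captured stdout are ANSI color codes, which wrap but do not break the plain substring "Kubernetes master"):

  $ kubectl --kubeconfig=/root/.kube/config cluster-info
  # Succeeds when the master endpoint line is present in the output
  $ kubectl cluster-info | grep -q "Kubernetes master" && echo OK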
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":259,"skipped":4162,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:16:37.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:17:21.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3028" for this suite. 
• [SLOW TEST:44.007 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4179,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:17:21.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 22:17:21.704: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 22:17:23.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425841, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425841, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425841, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425841, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 22:17:26.747: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:17:26.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2830-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:17:27.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8442" for this suite. STEP: Destroying namespace "webhook-8442-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.433 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":261,"skipped":4182,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:17:28.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b7fac93f-b676-4548-b865-1ab9b189b7ad STEP: Creating a pod to test consume secrets Mar 21 22:17:28.570: INFO: Waiting up to 5m0s for pod "pod-secrets-df88635c-f189-4b6d-9d08-93f7170b7894" in namespace "secrets-2935" to be "success or failure" Mar 21 22:17:28.620: INFO: Pod "pod-secrets-df88635c-f189-4b6d-9d08-93f7170b7894": Phase="Pending", Reason="", readiness=false. Elapsed: 49.506813ms Mar 21 22:17:30.623: INFO: Pod "pod-secrets-df88635c-f189-4b6d-9d08-93f7170b7894": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052971993s Mar 21 22:17:32.627: INFO: Pod "pod-secrets-df88635c-f189-4b6d-9d08-93f7170b7894": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057375305s STEP: Saw pod success Mar 21 22:17:32.628: INFO: Pod "pod-secrets-df88635c-f189-4b6d-9d08-93f7170b7894" satisfied condition "success or failure" Mar 21 22:17:32.631: INFO: Trying to get logs from node jerma-worker pod pod-secrets-df88635c-f189-4b6d-9d08-93f7170b7894 container secret-volume-test: STEP: delete the pod Mar 21 22:17:32.821: INFO: Waiting for pod pod-secrets-df88635c-f189-4b6d-9d08-93f7170b7894 to disappear Mar 21 22:17:32.837: INFO: Pod pod-secrets-df88635c-f189-4b6d-9d08-93f7170b7894 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:17:32.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2935" for this suite. 
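The secret-volume flow above can be reproduced directly: create a Secret, mount it as a volume, and read the projected file. Secret, pod and key names here are made up for illustration:

# Secret with one key, mounted into the pod filesystem as files.
kubectl create secret generic demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF

# Once the pod reports Succeeded, the log should be the secret's value.
kubectl logs secret-volume-demo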
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:17:32.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 21 22:17:33.113: INFO: Waiting up to 5m0s for pod "pod-6d46fad9-ba2a-461b-bc12-0d22af615e6a" in namespace "emptydir-2121" to be "success or failure" Mar 21 22:17:33.124: INFO: Pod "pod-6d46fad9-ba2a-461b-bc12-0d22af615e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.434003ms Mar 21 22:17:35.128: INFO: Pod "pod-6d46fad9-ba2a-461b-bc12-0d22af615e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014928079s Mar 21 22:17:37.132: INFO: Pod "pod-6d46fad9-ba2a-461b-bc12-0d22af615e6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018871483s STEP: Saw pod success Mar 21 22:17:37.132: INFO: Pod "pod-6d46fad9-ba2a-461b-bc12-0d22af615e6a" satisfied condition "success or failure" Mar 21 22:17:37.135: INFO: Trying to get logs from node jerma-worker2 pod pod-6d46fad9-ba2a-461b-bc12-0d22af615e6a container test-container: STEP: delete the pod Mar 21 22:17:37.167: INFO: Waiting for pod pod-6d46fad9-ba2a-461b-bc12-0d22af615e6a to disappear Mar 21 22:17:37.194: INFO: Pod pod-6d46fad9-ba2a-461b-bc12-0d22af615e6a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:17:37.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2121" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4236,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:17:37.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 21 22:17:41.777: INFO: Successfully updated pod "adopt-release-7sqlj" STEP: Checking that the Job readopts the Pod Mar 21 22:17:41.777: INFO: Waiting up to 15m0s for pod "adopt-release-7sqlj" in namespace "job-8734" to be "adopted" Mar 21 22:17:41.817: INFO: Pod "adopt-release-7sqlj": Phase="Running", Reason="", readiness=true. Elapsed: 39.830496ms Mar 21 22:17:43.926: INFO: Pod "adopt-release-7sqlj": Phase="Running", Reason="", readiness=true. Elapsed: 2.149400204s Mar 21 22:17:43.926: INFO: Pod "adopt-release-7sqlj" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 21 22:17:44.718: INFO: Successfully updated pod "adopt-release-7sqlj" STEP: Checking that the Job releases the Pod Mar 21 22:17:44.718: INFO: Waiting up to 15m0s for pod "adopt-release-7sqlj" in namespace "job-8734" to be "released" Mar 21 22:17:44.783: INFO: Pod "adopt-release-7sqlj": Phase="Running", Reason="", readiness=true. Elapsed: 65.323917ms Mar 21 22:17:44.783: INFO: Pod "adopt-release-7sqlj" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:17:44.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8734" for this suite. 
• [SLOW TEST:7.671 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":264,"skipped":4250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:17:44.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 21 22:17:53.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 21 22:17:53.007: INFO: Pod pod-with-prestop-exec-hook still exists Mar 21 22:17:55.007: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 21 22:17:55.021: INFO: Pod pod-with-prestop-exec-hook still exists Mar 21 22:17:57.007: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 21 22:17:57.012: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:17:57.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3585" for this suite. 
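The prestop assertion works because an exec lifecycle hook runs inside the container when deletion begins, before SIGTERM is delivered; the test's hook reports back to the HTTP handler pod created in BeforeEach. A self-contained variant with a file-writing hook instead (names and command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo goodbye > /tmp/prestop.log"]
EOF

# Deleting the pod triggers the preStop handler before the container stops.
kubectl delete pod prestop-demo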
• [SLOW TEST:12.152 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4297,"failed":0} SS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:17:57.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-4a29efd4-a17f-47dc-946f-07b562b2a353 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:17:57.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3508" for this suite. 
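The empty-key case is rejected by API-server validation, not by anything in the kubelet: ConfigMap data keys must be non-empty and consist of valid characters. The following apply is therefore expected to fail, mirroring the test (the ConfigMap name is hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-empty-key-demo
data:
  "": "value"
EOF
# Expected: the request is rejected with a key validation error.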
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":266,"skipped":4299,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:17:57.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5212 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5212 STEP: creating replication controller externalsvc in namespace services-5212 I0321 22:17:57.258426 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5212, replica count: 2 I0321 22:18:00.308874 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 22:18:03.309234 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 21 22:18:03.343: INFO: Creating new exec pod Mar 21 22:18:07.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5212 execpodkpwbj -- /bin/sh -x -c nslookup clusterip-service' Mar 21 22:18:07.613: INFO: stderr: "I0321 22:18:07.530181 4105 log.go:172] (0xc00093c000) (0xc000952000) Create stream\nI0321 22:18:07.530246 4105 log.go:172] (0xc00093c000) (0xc000952000) Stream added, broadcasting: 1\nI0321 22:18:07.532316 4105 log.go:172] (0xc00093c000) Reply frame received for 1\nI0321 22:18:07.532366 4105 log.go:172] (0xc00093c000) (0xc000751b80) Create stream\nI0321 22:18:07.532386 4105 log.go:172] (0xc00093c000) (0xc000751b80) Stream added, broadcasting: 3\nI0321 22:18:07.533621 4105 log.go:172] (0xc00093c000) Reply frame received for 3\nI0321 22:18:07.533655 4105 log.go:172] (0xc00093c000) (0xc000751d60) Create stream\nI0321 22:18:07.533670 4105 log.go:172] (0xc00093c000) (0xc000751d60) Stream added, broadcasting: 5\nI0321 22:18:07.534631 4105 log.go:172] (0xc00093c000) Reply frame received for 5\nI0321 22:18:07.596823 4105 log.go:172] (0xc00093c000) Data frame received for 5\nI0321 22:18:07.596859 4105 log.go:172] (0xc000751d60) (5) Data frame handling\nI0321 22:18:07.596886 4105 log.go:172] (0xc000751d60) (5) Data frame sent\n+ nslookup clusterip-service\nI0321 22:18:07.605233 4105 log.go:172] (0xc00093c000) Data frame received for 3\nI0321 22:18:07.605252 4105 log.go:172] (0xc000751b80) (3) Data frame handling\nI0321 22:18:07.605262 4105 log.go:172] (0xc000751b80) (3) Data frame sent\nI0321 22:18:07.606422 4105 log.go:172] 
(0xc00093c000) Data frame received for 3\nI0321 22:18:07.606453 4105 log.go:172] (0xc000751b80) (3) Data frame handling\nI0321 22:18:07.606475 4105 log.go:172] (0xc000751b80) (3) Data frame sent\nI0321 22:18:07.606672 4105 log.go:172] (0xc00093c000) Data frame received for 3\nI0321 22:18:07.606694 4105 log.go:172] (0xc000751b80) (3) Data frame handling\nI0321 22:18:07.606828 4105 log.go:172] (0xc00093c000) Data frame received for 5\nI0321 22:18:07.606897 4105 log.go:172] (0xc000751d60) (5) Data frame handling\nI0321 22:18:07.608269 4105 log.go:172] (0xc00093c000) Data frame received for 1\nI0321 22:18:07.608302 4105 log.go:172] (0xc000952000) (1) Data frame handling\nI0321 22:18:07.608326 4105 log.go:172] (0xc000952000) (1) Data frame sent\nI0321 22:18:07.608351 4105 log.go:172] (0xc00093c000) (0xc000952000) Stream removed, broadcasting: 1\nI0321 22:18:07.608403 4105 log.go:172] (0xc00093c000) Go away received\nI0321 22:18:07.608748 4105 log.go:172] (0xc00093c000) (0xc000952000) Stream removed, broadcasting: 1\nI0321 22:18:07.608768 4105 log.go:172] (0xc00093c000) (0xc000751b80) Stream removed, broadcasting: 3\nI0321 22:18:07.608777 4105 log.go:172] (0xc00093c000) (0xc000751d60) Stream removed, broadcasting: 5\n" Mar 21 22:18:07.613: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5212.svc.cluster.local\tcanonical name = externalsvc.services-5212.svc.cluster.local.\nName:\texternalsvc.services-5212.svc.cluster.local\nAddress: 10.110.55.184\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5212, will wait for the garbage collector to delete the pods Mar 21 22:18:07.673: INFO: Deleting ReplicationController externalsvc took: 6.615151ms Mar 21 22:18:07.973: INFO: Terminating ReplicationController externalsvc pods took: 300.257822ms Mar 21 22:18:19.589: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:18:19.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5212" for this suite. 
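The type change above makes cluster DNS answer for the service with a CNAME to spec.externalName instead of an A record for a cluster IP, which is exactly what the nslookup output shows. The same lookup can be run from any throwaway pod (service names match the test; the client pod is hypothetical):

# One-shot client pod; the answer should include a canonical-name line
# pointing at the externalName target.
kubectl run dns-check --image=busybox --restart=Never --rm -i -- \
  nslookup clusterip-service.services-5212.svc.cluster.local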
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:22.509 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":267,"skipped":4306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:18:19.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-f7604921-06dc-444c-a5da-a56a73594baf in namespace container-probe-2948 Mar 21 22:18:25.722: INFO: Started pod liveness-f7604921-06dc-444c-a5da-a56a73594baf in namespace container-probe-2948 STEP: checking the pod's current state and verifying that restartCount is present Mar 21 22:18:25.725: INFO: Initial restart count of pod liveness-f7604921-06dc-444c-a5da-a56a73594baf is 0 Mar 21 22:18:49.792: INFO: Restart count of pod container-probe-2948/liveness-f7604921-06dc-444c-a5da-a56a73594baf is now 1 (24.067042334s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:18:49.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2948" for this suite. 
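The restart observed above is the liveness machinery working as intended: the probed endpoint starts failing, the kubelet kills the container, and restartCount increments. A minimal reproduction modeled on the standard docs example; the image, args and timings are illustrative, not the exact ones this suite uses:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF

# Once /healthz begins returning errors, this count should reach 1 and keep rising.
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'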
• [SLOW TEST:30.217 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4343,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:18:49.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 21 22:18:49.906: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 21 22:18:52.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4864 create -f -' Mar 21 22:18:56.023: INFO: stderr: "" Mar 21 22:18:56.023: INFO: stdout: "e2e-test-crd-publish-openapi-9308-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 21 22:18:56.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4864 delete e2e-test-crd-publish-openapi-9308-crds test-cr' Mar 21 22:18:56.119: INFO: stderr: "" Mar 21 22:18:56.119: INFO: stdout: "e2e-test-crd-publish-openapi-9308-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 21 22:18:56.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4864 apply -f -' Mar 21 22:18:56.343: INFO: stderr: "" Mar 21 22:18:56.343: INFO: stdout: "e2e-test-crd-publish-openapi-9308-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 21 22:18:56.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4864 delete e2e-test-crd-publish-openapi-9308-crds test-cr' Mar 21 22:18:56.454: INFO: stderr: "" Mar 21 22:18:56.454: INFO: stdout: "e2e-test-crd-publish-openapi-9308-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 21 22:18:56.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9308-crds' Mar 21 22:18:56.696: INFO: stderr: "" Mar 21 22:18:56.696: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9308-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:18:59.556: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4864" for this suite. • [SLOW TEST:9.740 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":269,"skipped":4346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:18:59.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 22:18:59.997: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 22:19:02.006: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425940, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425940, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425940, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720425939, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 22:19:05.053: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:19:05.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9881" for this suite. STEP: Destroying namespace "webhook-9881-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.688 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":270,"skipped":4390,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:19:05.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3850 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 21 22:19:05.391: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 21 22:19:27.548: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.43:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3850 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:19:27.548: INFO: >>> kubeConfig: /root/.kube/config I0321 22:19:27.577648 6 log.go:172] (0xc0069f1550) (0xc000d69720) Create stream I0321 22:19:27.577673 6 log.go:172] (0xc0069f1550) (0xc000d69720) Stream added, broadcasting: 1 I0321 22:19:27.579193 6 log.go:172] (0xc0069f1550) Reply frame received for 1 I0321 22:19:27.579228 6 log.go:172] (0xc0069f1550) (0xc0024f5f40) Create stream I0321 22:19:27.579242 6 log.go:172] (0xc0069f1550) (0xc0024f5f40) Stream added, broadcasting: 3 I0321 22:19:27.580153 6 log.go:172] (0xc0069f1550) Reply frame received for 3 I0321 22:19:27.580186 6 log.go:172] (0xc0069f1550) (0xc0011a8280) Create stream I0321 22:19:27.580202 6 log.go:172] (0xc0069f1550) (0xc0011a8280) Stream added, broadcasting: 5 I0321 22:19:27.581099 6 log.go:172] (0xc0069f1550) Reply frame received for 5 I0321 22:19:27.643461 6 log.go:172] (0xc0069f1550) Data frame received for 3 I0321 22:19:27.643485 6 log.go:172] (0xc0024f5f40) (3) Data frame handling I0321 22:19:27.643495 6 log.go:172] (0xc0024f5f40) (3) Data 
frame sent I0321 22:19:27.643501 6 log.go:172] (0xc0069f1550) Data frame received for 3 I0321 22:19:27.643508 6 log.go:172] (0xc0024f5f40) (3) Data frame handling I0321 22:19:27.643594 6 log.go:172] (0xc0069f1550) Data frame received for 5 I0321 22:19:27.643656 6 log.go:172] (0xc0011a8280) (5) Data frame handling I0321 22:19:27.646570 6 log.go:172] (0xc0069f1550) Data frame received for 1 I0321 22:19:27.646592 6 log.go:172] (0xc000d69720) (1) Data frame handling I0321 22:19:27.646620 6 log.go:172] (0xc000d69720) (1) Data frame sent I0321 22:19:27.646636 6 log.go:172] (0xc0069f1550) (0xc000d69720) Stream removed, broadcasting: 1 I0321 22:19:27.646652 6 log.go:172] (0xc0069f1550) Go away received I0321 22:19:27.646828 6 log.go:172] (0xc0069f1550) (0xc000d69720) Stream removed, broadcasting: 1 I0321 22:19:27.646866 6 log.go:172] (0xc0069f1550) (0xc0024f5f40) Stream removed, broadcasting: 3 I0321 22:19:27.646895 6 log.go:172] (0xc0069f1550) (0xc0011a8280) Stream removed, broadcasting: 5 Mar 21 22:19:27.646: INFO: Found all expected endpoints: [netserver-0] Mar 21 22:19:27.649: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.93:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3850 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 22:19:27.649: INFO: >>> kubeConfig: /root/.kube/config I0321 22:19:27.684621 6 log.go:172] (0xc00704e370) (0xc0011a8dc0) Create stream I0321 22:19:27.684656 6 log.go:172] (0xc00704e370) (0xc0011a8dc0) Stream added, broadcasting: 1 I0321 22:19:27.687368 6 log.go:172] (0xc00704e370) Reply frame received for 1 I0321 22:19:27.687417 6 log.go:172] (0xc00704e370) (0xc0011a8fa0) Create stream I0321 22:19:27.687428 6 log.go:172] (0xc00704e370) (0xc0011a8fa0) Stream added, broadcasting: 3 I0321 22:19:27.690087 6 log.go:172] (0xc00704e370) Reply frame received for 3 I0321 22:19:27.690147 6 log.go:172] (0xc00704e370) (0xc001dee280) Create stream I0321 22:19:27.690166 6 log.go:172] (0xc00704e370) (0xc001dee280) Stream added, broadcasting: 5 I0321 22:19:27.691319 6 log.go:172] (0xc00704e370) Reply frame received for 5 I0321 22:19:27.760332 6 log.go:172] (0xc00704e370) Data frame received for 5 I0321 22:19:27.760366 6 log.go:172] (0xc001dee280) (5) Data frame handling I0321 22:19:27.760397 6 log.go:172] (0xc00704e370) Data frame received for 3 I0321 22:19:27.760414 6 log.go:172] (0xc0011a8fa0) (3) Data frame handling I0321 22:19:27.760431 6 log.go:172] (0xc0011a8fa0) (3) Data frame sent I0321 22:19:27.760446 6 log.go:172] (0xc00704e370) Data frame received for 3 I0321 22:19:27.760458 6 log.go:172] (0xc0011a8fa0) (3) Data frame handling I0321 22:19:27.761967 6 log.go:172] (0xc00704e370) Data frame received for 1 I0321 22:19:27.762002 6 log.go:172] (0xc0011a8dc0) (1) Data frame handling I0321 22:19:27.762020 6 log.go:172] (0xc0011a8dc0) (1) Data frame sent I0321 22:19:27.762049 6 log.go:172] (0xc00704e370) (0xc0011a8dc0) Stream removed, broadcasting: 1 I0321 22:19:27.762069 6 log.go:172] (0xc00704e370) Go away received I0321 22:19:27.762135 6 log.go:172] (0xc00704e370) (0xc0011a8dc0) Stream removed, broadcasting: 1 I0321 22:19:27.762149 6 log.go:172] (0xc00704e370) (0xc0011a8fa0) Stream removed, broadcasting: 3 I0321 22:19:27.762155 6 log.go:172] (0xc00704e370) (0xc001dee280) Stream removed, broadcasting: 5 Mar 21 22:19:27.762: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:19:27.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3850" for this suite. • [SLOW TEST:22.511 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4396,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:19:27.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 21 22:19:27.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4915' Mar 21 22:19:28.198: INFO: stderr: "" Mar 21 22:19:28.198: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 21 22:19:28.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4915' Mar 21 22:19:28.317: INFO: stderr: "" Mar 21 22:19:28.317: INFO: stdout: "update-demo-nautilus-b2wpw update-demo-nautilus-v9nv5 " Mar 21 22:19:28.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2wpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4915' Mar 21 22:19:28.413: INFO: stderr: "" Mar 21 22:19:28.413: INFO: stdout: "" Mar 21 22:19:28.413: INFO: update-demo-nautilus-b2wpw is created but not running Mar 21 22:19:33.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4915' Mar 21 22:19:33.537: INFO: stderr: "" Mar 21 22:19:33.537: INFO: stdout: "update-demo-nautilus-b2wpw update-demo-nautilus-v9nv5 " Mar 21 22:19:33.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2wpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4915' Mar 21 22:19:33.676: INFO: stderr: "" Mar 21 22:19:33.676: INFO: stdout: "true" Mar 21 22:19:33.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b2wpw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4915' Mar 21 22:19:33.763: INFO: stderr: "" Mar 21 22:19:33.763: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 22:19:33.763: INFO: validating pod update-demo-nautilus-b2wpw Mar 21 22:19:33.767: INFO: got data: { "image": "nautilus.jpg" } Mar 21 22:19:33.767: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 22:19:33.767: INFO: update-demo-nautilus-b2wpw is verified up and running Mar 21 22:19:33.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9nv5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4915' Mar 21 22:19:33.853: INFO: stderr: "" Mar 21 22:19:33.853: INFO: stdout: "true" Mar 21 22:19:33.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9nv5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4915' Mar 21 22:19:33.945: INFO: stderr: "" Mar 21 22:19:33.945: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 22:19:33.945: INFO: validating pod update-demo-nautilus-v9nv5 Mar 21 22:19:33.948: INFO: got data: { "image": "nautilus.jpg" } Mar 21 22:19:33.948: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 22:19:33.948: INFO: update-demo-nautilus-v9nv5 is verified up and running STEP: using delete to clean up resources Mar 21 22:19:33.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4915' Mar 21 22:19:34.084: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 21 22:19:34.084: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 21 22:19:34.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4915' Mar 21 22:19:34.213: INFO: stderr: "No resources found in kubectl-4915 namespace.\n" Mar 21 22:19:34.213: INFO: stdout: "" Mar 21 22:19:34.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4915 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 21 22:19:34.299: INFO: stderr: "" Mar 21 22:19:34.299: INFO: stdout: "update-demo-nautilus-b2wpw\nupdate-demo-nautilus-v9nv5\n" Mar 21 22:19:34.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4915' Mar 21 22:19:34.891: INFO: stderr: "No resources found in kubectl-4915 namespace.\n" Mar 21 22:19:34.891: INFO: stdout: "" Mar 21 22:19:34.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4915 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 21 22:19:34.982: INFO: stderr: "" Mar 21 22:19:34.982: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:19:34.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4915" for this suite. • [SLOW TEST:7.220 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":272,"skipped":4404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:19:34.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:19:35.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8822" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":273,"skipped":4435,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:19:35.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-46d00f82-be0c-4322-8306-ed1396de72f3 in namespace container-probe-6733 Mar 21 22:19:39.559: INFO: Started pod busybox-46d00f82-be0c-4322-8306-ed1396de72f3 in namespace container-probe-6733 STEP: checking the pod's current state and verifying that restartCount is present Mar 21 22:19:39.562: INFO: Initial restart count of pod busybox-46d00f82-be0c-4322-8306-ed1396de72f3 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:23:40.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6733" for this suite. 
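The exec-probe test above is the converse case: a probe that keeps succeeding must never trigger a restart, which is why the suite simply watches restartCount stay at 0 for several minutes. A sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

# /tmp/health is never removed, so 'cat' keeps exiting 0 and the
# restart count should remain 0 for the life of the pod.
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'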
• [SLOW TEST:244.915 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:23:40.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:23:40.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9506" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":275,"skipped":4470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:23:40.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 22:23:41.517: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 22:23:43.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720426221, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720426221, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63720426221, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720426221, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 22:23:46.601: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 21 22:23:46.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1107" for this suite. STEP: Destroying namespace "webhook-1107-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.630 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":276,"skipped":4517,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 21 22:23:47.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8329 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8329 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8329 Mar 21 22:23:47.138: INFO: Found 0 stateful pods, waiting for 1 Mar 21 
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 22:23:47.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8329
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8329
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8329
Mar 21 22:23:47.138: INFO: Found 0 stateful pods, waiting for 1
Mar 21 22:23:57.142: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Mar 21 22:23:57.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8329 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 21 22:23:57.409: INFO: stderr: "I0321 22:23:57.280278 4505 log.go:172] (0xc000104f20) (0xc00083a280) Create stream\nI0321 22:23:57.280332 4505 log.go:172] (0xc000104f20) (0xc00083a280) Stream added, broadcasting: 1\nI0321 22:23:57.283228 4505 log.go:172] (0xc000104f20) Reply frame received for 1\nI0321 22:23:57.283281 4505 log.go:172] (0xc000104f20) (0xc000161360) Create stream\nI0321 22:23:57.283306 4505 log.go:172] (0xc000104f20) (0xc000161360) Stream added, broadcasting: 3\nI0321 22:23:57.284405 4505 log.go:172] (0xc000104f20) Reply frame received for 3\nI0321 22:23:57.284437 4505 log.go:172] (0xc000104f20) (0xc0005db9a0) Create stream\nI0321 22:23:57.284451 4505 log.go:172] (0xc000104f20) (0xc0005db9a0) Stream added, broadcasting: 5\nI0321 22:23:57.285635 4505 log.go:172] (0xc000104f20) Reply frame received for 5\nI0321 22:23:57.377596 4505 log.go:172] (0xc000104f20) Data frame received for 5\nI0321 22:23:57.377631 4505 log.go:172] (0xc0005db9a0) (5) Data frame handling\nI0321 22:23:57.377656 4505 log.go:172] (0xc0005db9a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0321 22:23:57.403571 4505 log.go:172] (0xc000104f20) Data frame received for 3\nI0321 22:23:57.403602 4505 log.go:172] (0xc000161360) (3) Data frame handling\nI0321 22:23:57.403624 4505 log.go:172] (0xc000161360) (3) Data frame sent\nI0321 22:23:57.403646 4505 log.go:172] (0xc000104f20) Data frame received for 3\nI0321 22:23:57.403661 4505 log.go:172] (0xc000161360) (3) Data frame handling\nI0321 22:23:57.403835 4505 log.go:172] (0xc000104f20) Data frame received for 5\nI0321 22:23:57.403868 4505 log.go:172] (0xc0005db9a0) (5) Data frame handling\nI0321 22:23:57.405675 4505 log.go:172] (0xc000104f20) Data frame received for 1\nI0321 22:23:57.405688 4505 log.go:172] (0xc00083a280) (1) Data frame handling\nI0321 22:23:57.405694 4505 log.go:172] (0xc00083a280) (1) Data frame sent\nI0321 22:23:57.405701 4505 log.go:172] (0xc000104f20) (0xc00083a280) Stream removed, broadcasting: 1\nI0321 22:23:57.405708 4505 log.go:172] (0xc000104f20) Go away received\nI0321 22:23:57.406235 4505 log.go:172] (0xc000104f20) (0xc00083a280) Stream removed, broadcasting: 1\nI0321 22:23:57.406261 4505 log.go:172] (0xc000104f20) (0xc000161360) Stream removed, broadcasting: 3\nI0321 22:23:57.406274 4505 log.go:172] (0xc000104f20) (0xc0005db9a0) Stream removed, broadcasting: 5\n"
Mar 21 22:23:57.409: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 21 22:23:57.409: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 21 22:23:57.413: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar 21 22:24:07.417: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 21 22:24:07.418: INFO: Waiting for statefulset status.replicas updated to 0
Mar 21 22:24:07.429: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999576s
Mar 21 22:24:08.434: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997659849s
Mar 21 22:24:09.438: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.992594941s
Mar 21 22:24:10.443: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.988466782s
Mar 21 22:24:11.447: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.983384834s
Mar 21 22:24:12.451: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.979311368s
Mar 21 22:24:13.456: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.974940996s
Mar 21 22:24:14.469: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.970030883s
Mar 21 22:24:15.474: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.957185214s
Mar 21 22:24:16.478: INFO: Verifying statefulset ss doesn't scale past 1 for another 952.793556ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8329
Mar 21 22:24:17.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 21 22:24:17.676: INFO: stderr: "I0321 22:24:17.618198 4528 log.go:172] (0xc000593080) (0xc000a54000) Create stream\nI0321 22:24:17.618267 4528 log.go:172] (0xc000593080) (0xc000a54000) Stream added, broadcasting: 1\nI0321 22:24:17.620962 4528 log.go:172] (0xc000593080) Reply frame received for 1\nI0321 22:24:17.620997 4528 log.go:172] (0xc000593080) (0xc000a3c000) Create stream\nI0321 22:24:17.621007 4528 log.go:172] (0xc000593080) (0xc000a3c000) Stream added, broadcasting: 3\nI0321 22:24:17.622214 4528 log.go:172] (0xc000593080) Reply frame received for 3\nI0321 22:24:17.622266 4528 log.go:172] (0xc000593080) (0xc000a540a0) Create stream\nI0321 22:24:17.622278 4528 log.go:172] (0xc000593080) (0xc000a540a0) Stream added, broadcasting: 5\nI0321 22:24:17.623390 4528 log.go:172] (0xc000593080) Reply frame received for 5\nI0321 22:24:17.669958 4528 log.go:172] (0xc000593080) Data frame received for 5\nI0321 22:24:17.670003 4528 log.go:172] (0xc000a540a0) (5) Data frame handling\nI0321 22:24:17.670017 4528 log.go:172] (0xc000a540a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0321 22:24:17.670049 4528 log.go:172] (0xc000593080) Data frame received for 3\nI0321 22:24:17.670076 4528 log.go:172] (0xc000a3c000) (3) Data frame handling\nI0321 22:24:17.670092 4528 log.go:172] (0xc000a3c000) (3) Data frame sent\nI0321 22:24:17.670106 4528 log.go:172] (0xc000593080) Data frame received for 3\nI0321 22:24:17.670115 4528 log.go:172] (0xc000a3c000) (3) Data frame handling\nI0321 22:24:17.670128 4528 log.go:172] (0xc000593080) Data frame received for 5\nI0321 22:24:17.670137 4528 log.go:172] (0xc000a540a0) (5) Data frame handling\nI0321 22:24:17.671656 4528 log.go:172] (0xc000593080) Data frame received for 1\nI0321 22:24:17.671678 4528 log.go:172] (0xc000a54000) (1) Data frame handling\nI0321 22:24:17.671690 4528 log.go:172] (0xc000a54000) (1) Data frame sent\nI0321 22:24:17.671824 4528 log.go:172] (0xc000593080) (0xc000a54000) Stream removed, broadcasting: 1\nI0321 22:24:17.671851 4528 log.go:172] (0xc000593080) Go away received\nI0321 22:24:17.672215 4528 log.go:172] (0xc000593080) (0xc000a54000) Stream removed, broadcasting: 1\nI0321 22:24:17.672240 4528 log.go:172] (0xc000593080) (0xc000a3c000) Stream removed, broadcasting: 3\nI0321 22:24:17.672262 4528 log.go:172] (0xc000593080) (0xc000a540a0) Stream removed, broadcasting: 5\n"
Mar 21 22:24:17.676: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 21 22:24:17.676: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 21 22:24:17.681: INFO: Found 1 stateful pods, waiting for 3
Mar 21 22:24:27.686: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 22:24:27.686: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 22:24:27.686: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Mar 21 22:24:27.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8329 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 21 22:24:27.890: INFO: stderr: "I0321 22:24:27.818169 4550 log.go:172] (0xc000aac000) (0xc000ae20a0) Create stream\nI0321 22:24:27.818235 4550 log.go:172] (0xc000aac000) (0xc000ae20a0) Stream added, broadcasting: 1\nI0321 22:24:27.820631 4550 log.go:172] (0xc000aac000) Reply frame received for 1\nI0321 22:24:27.820667 4550 log.go:172] (0xc000aac000) (0xc000a62500) Create stream\nI0321 22:24:27.820686 4550 log.go:172] (0xc000aac000) (0xc000a62500) Stream added, broadcasting: 3\nI0321 22:24:27.821725 4550 log.go:172] (0xc000aac000) Reply frame received for 3\nI0321 22:24:27.821750 4550 log.go:172] (0xc000aac000) (0xc000a625a0) Create stream\nI0321 22:24:27.821756 4550 log.go:172] (0xc000aac000) (0xc000a625a0) Stream added, broadcasting: 5\nI0321 22:24:27.822651 4550 log.go:172] (0xc000aac000) Reply frame received for 5\nI0321 22:24:27.885085 4550 log.go:172] (0xc000aac000) Data frame received for 5\nI0321 22:24:27.885284 4550 log.go:172] (0xc000a625a0) (5) Data frame handling\nI0321 22:24:27.885327 4550 log.go:172] (0xc000aac000) Data frame received for 3\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0321 22:24:27.885354 4550 log.go:172] (0xc000a62500) (3) Data frame handling\nI0321 22:24:27.885373 4550 log.go:172] (0xc000a62500) (3) Data frame sent\nI0321 22:24:27.885387 4550 log.go:172] (0xc000aac000) Data frame received for 3\nI0321 22:24:27.885398 4550 log.go:172] (0xc000a62500) (3) Data frame handling\nI0321 22:24:27.885459 4550 log.go:172] (0xc000a625a0) (5) Data frame sent\nI0321 22:24:27.885516 4550 log.go:172] (0xc000aac000) Data frame received for 5\nI0321 22:24:27.885531 4550 log.go:172] (0xc000a625a0) (5) Data frame handling\nI0321 22:24:27.886571 4550 log.go:172] (0xc000aac000) Data frame received for 1\nI0321 22:24:27.886583 4550 log.go:172] (0xc000ae20a0) (1) Data frame handling\nI0321 22:24:27.886589 4550 log.go:172] (0xc000ae20a0) (1) Data frame sent\nI0321 22:24:27.886710 4550 log.go:172] (0xc000aac000) (0xc000ae20a0) Stream removed, broadcasting: 1\nI0321 22:24:27.886815 4550 log.go:172] (0xc000aac000) Go away received\nI0321 22:24:27.887321 4550 log.go:172] (0xc000aac000) (0xc000ae20a0) Stream removed, broadcasting: 1\nI0321 22:24:27.887345 4550 log.go:172] (0xc000aac000) (0xc000a62500) Stream removed, broadcasting: 3\nI0321 22:24:27.887360 4550 log.go:172] (0xc000aac000) (0xc000a625a0) Stream removed, broadcasting: 5\n"
Mar 21 22:24:27.890: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 21 22:24:27.890: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 21 22:24:27.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8329 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 21 22:24:28.116: INFO: stderr: "I0321 22:24:28.009731 4570 log.go:172] (0xc000a28000) (0xc0008ba000) Create stream\nI0321 22:24:28.009783 4570 log.go:172] (0xc000a28000) (0xc0008ba000) Stream added, broadcasting: 1\nI0321 22:24:28.011937 4570 log.go:172] (0xc000a28000) Reply frame received for 1\nI0321 22:24:28.011971 4570 log.go:172] (0xc000a28000) (0xc0006f79a0) Create stream\nI0321 22:24:28.011982 4570 log.go:172] (0xc000a28000) (0xc0006f79a0) Stream added, broadcasting: 3\nI0321 22:24:28.012882 4570 log.go:172] (0xc000a28000) Reply frame received for 3\nI0321 22:24:28.012937 4570 log.go:172] (0xc000a28000) (0xc0008ba0a0) Create stream\nI0321 22:24:28.012958 4570 log.go:172] (0xc000a28000) (0xc0008ba0a0) Stream added, broadcasting: 5\nI0321 22:24:28.014078 4570 log.go:172] (0xc000a28000) Reply frame received for 5\nI0321 22:24:28.085283 4570 log.go:172] (0xc000a28000) Data frame received for 5\nI0321 22:24:28.085303 4570 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0321 22:24:28.085312 4570 log.go:172] (0xc0008ba0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0321 22:24:28.108869 4570 log.go:172] (0xc000a28000) Data frame received for 3\nI0321 22:24:28.108903 4570 log.go:172] (0xc0006f79a0) (3) Data frame handling\nI0321 22:24:28.108924 4570 log.go:172] (0xc0006f79a0) (3) Data frame sent\nI0321 22:24:28.109068 4570 log.go:172] (0xc000a28000) Data frame received for 3\nI0321 22:24:28.109087 4570 log.go:172] (0xc0006f79a0) (3) Data frame handling\nI0321 22:24:28.109301 4570 log.go:172] (0xc000a28000) Data frame received for 5\nI0321 22:24:28.109326 4570 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0321 22:24:28.111609 4570 log.go:172] (0xc000a28000) Data frame received for 1\nI0321 22:24:28.111640 4570 log.go:172] (0xc0008ba000) (1) Data frame handling\nI0321 22:24:28.111668 4570 log.go:172] (0xc0008ba000) (1) Data frame sent\nI0321 22:24:28.111690 4570 log.go:172] (0xc000a28000) (0xc0008ba000) Stream removed, broadcasting: 1\nI0321 22:24:28.111708 4570 log.go:172] (0xc000a28000) Go away received\nI0321 22:24:28.112234 4570 log.go:172] (0xc000a28000) (0xc0008ba000) Stream removed, broadcasting: 1\nI0321 22:24:28.112266 4570 log.go:172] (0xc000a28000) (0xc0006f79a0) Stream removed, broadcasting: 3\nI0321 22:24:28.112279 4570 log.go:172] (0xc000a28000) (0xc0008ba0a0) Stream removed, broadcasting: 5\n"
Mar 21 22:24:28.117: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 21 22:24:28.117: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 21 22:24:28.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8329 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 21 22:24:28.354: INFO: stderr: "I0321 22:24:28.248504 4592 log.go:172] (0xc000a92c60) (0xc000a606e0) Create stream\nI0321 22:24:28.248570 4592 log.go:172] (0xc000a92c60) (0xc000a606e0) Stream added, broadcasting: 1\nI0321 22:24:28.253227 4592 log.go:172] (0xc000a92c60) Reply frame received for 1\nI0321 22:24:28.253269 4592 log.go:172] (0xc000a92c60) (0xc0006ddc20) Create stream\nI0321 22:24:28.253279 4592 log.go:172] (0xc000a92c60) (0xc0006ddc20) Stream added, broadcasting: 3\nI0321 22:24:28.254179 4592 log.go:172] (0xc000a92c60) Reply frame received for 3\nI0321 22:24:28.254228 4592 log.go:172] (0xc000a92c60) (0xc00068c820) Create stream\nI0321 22:24:28.254250 4592 log.go:172] (0xc000a92c60) (0xc00068c820) Stream added, broadcasting: 5\nI0321 22:24:28.255176 4592 log.go:172] (0xc000a92c60) Reply frame received for 5\nI0321 22:24:28.311490 4592 log.go:172] (0xc000a92c60) Data frame received for 5\nI0321 22:24:28.311519 4592 log.go:172] (0xc00068c820) (5) Data frame handling\nI0321 22:24:28.311538 4592 log.go:172] (0xc00068c820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0321 22:24:28.348482 4592 log.go:172] (0xc000a92c60) Data frame received for 3\nI0321 22:24:28.348514 4592 log.go:172] (0xc0006ddc20) (3) Data frame handling\nI0321 22:24:28.348537 4592 log.go:172] (0xc0006ddc20) (3) Data frame sent\nI0321 22:24:28.348553 4592 log.go:172] (0xc000a92c60) Data frame received for 3\nI0321 22:24:28.348564 4592 log.go:172] (0xc0006ddc20) (3) Data frame handling\nI0321 22:24:28.348614 4592 log.go:172] (0xc000a92c60) Data frame received for 5\nI0321 22:24:28.348649 4592 log.go:172] (0xc00068c820) (5) Data frame handling\nI0321 22:24:28.350327 4592 log.go:172] (0xc000a92c60) Data frame received for 1\nI0321 22:24:28.350342 4592 log.go:172] (0xc000a606e0) (1) Data frame handling\nI0321 22:24:28.350350 4592 log.go:172] (0xc000a606e0) (1) Data frame sent\nI0321 22:24:28.350359 4592 log.go:172] (0xc000a92c60) (0xc000a606e0) Stream removed, broadcasting: 1\nI0321 22:24:28.350417 4592 log.go:172] (0xc000a92c60) Go away received\nI0321 22:24:28.350649 4592 log.go:172] (0xc000a92c60) (0xc000a606e0) Stream removed, broadcasting: 1\nI0321 22:24:28.350661 4592 log.go:172] (0xc000a92c60) (0xc0006ddc20) Stream removed, broadcasting: 3\nI0321 22:24:28.350667 4592 log.go:172] (0xc000a92c60) (0xc00068c820) Stream removed, broadcasting: 5\n"
Mar 21 22:24:28.354: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 21 22:24:28.354: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 21 22:24:28.354: INFO: Waiting for statefulset status.replicas updated to 0
Mar 21 22:24:28.386: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Mar 21 22:24:38.394: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 21 22:24:38.394: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar 21 22:24:38.394: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar 21 22:24:38.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999439s
Mar 21 22:24:39.412: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994572608s
Mar 21 22:24:40.417: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989532818s
Mar 21 22:24:41.422: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98408312s
Mar 21 22:24:42.427: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979234118s
Mar 21 22:24:43.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973881548s
Mar 21 22:24:44.439: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.968601311s
Mar 21 22:24:45.444: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962073655s
Mar 21 22:24:46.449: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.956727774s
Mar 21 22:24:47.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.151564ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8329
Mar 21 22:24:48.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 21 22:24:48.673: INFO: stderr: "I0321 22:24:48.587026 4612 log.go:172] (0xc000012dc0) (0xc000988140) Create stream\nI0321 22:24:48.587089 4612 log.go:172] (0xc000012dc0) (0xc000988140) Stream added, broadcasting: 1\nI0321 22:24:48.589531 4612 log.go:172] (0xc000012dc0) Reply frame received for 1\nI0321 22:24:48.589578 4612 log.go:172] (0xc000012dc0) (0xc000988280) Create stream\nI0321 22:24:48.589587 4612 log.go:172] (0xc000012dc0) (0xc000988280) Stream added, broadcasting: 3\nI0321 22:24:48.590481 4612 log.go:172] (0xc000012dc0) Reply frame received for 3\nI0321 22:24:48.590507 4612 log.go:172] (0xc000012dc0) (0xc000988320) Create stream\nI0321 22:24:48.590516 4612 log.go:172] (0xc000012dc0) (0xc000988320) Stream added, broadcasting: 5\nI0321 22:24:48.591302 4612 log.go:172] (0xc000012dc0) Reply frame received for 5\nI0321 22:24:48.665440 4612 log.go:172] (0xc000012dc0) Data frame received for 5\nI0321 22:24:48.665475 4612 log.go:172] (0xc000988320) (5) Data frame handling\nI0321 22:24:48.665505 4612 log.go:172] (0xc000988320) (5) Data frame sent\nI0321 22:24:48.665521 4612 log.go:172] (0xc000012dc0) Data frame received for 5\nI0321 22:24:48.665532 4612 log.go:172] (0xc000988320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0321 22:24:48.665691 4612 log.go:172] (0xc000012dc0) Data frame received for 3\nI0321 22:24:48.665723 4612 log.go:172] (0xc000988280) (3) Data frame handling\nI0321 22:24:48.665753 4612 log.go:172] (0xc000988280) (3) Data frame sent\nI0321 22:24:48.665768 4612 log.go:172] (0xc000012dc0) Data frame received for 3\nI0321 22:24:48.665779 4612 log.go:172] (0xc000988280) (3) Data frame handling\nI0321 22:24:48.667794 4612 log.go:172] (0xc000012dc0) Data frame received for 1\nI0321 22:24:48.667821 4612 log.go:172] (0xc000988140) (1) Data frame handling\nI0321 22:24:48.667845 4612 log.go:172] (0xc000988140) (1) Data frame sent\nI0321 22:24:48.667882 4612 log.go:172] (0xc000012dc0) (0xc000988140) Stream removed, broadcasting: 1\nI0321 22:24:48.667921 4612 log.go:172] (0xc000012dc0) Go away received\nI0321 22:24:48.668489 4612 log.go:172] (0xc000012dc0) (0xc000988140) Stream removed, broadcasting: 1\nI0321 22:24:48.668523 4612 log.go:172] (0xc000012dc0) (0xc000988280) Stream removed, broadcasting: 3\nI0321 22:24:48.668546 4612 log.go:172] (0xc000012dc0) (0xc000988320) Stream removed, broadcasting: 5\n"
Mar 21 22:24:48.673: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 21 22:24:48.673: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 21 22:24:48.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8329 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 21 22:24:48.865: INFO: stderr: "I0321 22:24:48.802502 4635 log.go:172] (0xc000577130) (0xc000651a40) Create stream\nI0321 22:24:48.802544 4635 log.go:172] (0xc000577130) (0xc000651a40) Stream added, broadcasting: 1\nI0321 22:24:48.804747 4635 log.go:172] (0xc000577130) Reply frame received for 1\nI0321 22:24:48.804795 4635 log.go:172] (0xc000577130) (0xc00090a000) Create stream\nI0321 22:24:48.804813 4635 log.go:172] (0xc000577130) (0xc00090a000) Stream added, broadcasting: 3\nI0321 22:24:48.805774 4635 log.go:172] (0xc000577130) Reply frame received for 3\nI0321 22:24:48.805828 4635 log.go:172] (0xc000577130) (0xc0008a8000) Create stream\nI0321 22:24:48.805850 4635 log.go:172] (0xc000577130) (0xc0008a8000) Stream added, broadcasting: 5\nI0321 22:24:48.806797 4635 log.go:172] (0xc000577130) Reply frame received for 5\nI0321 22:24:48.860997 4635 log.go:172] (0xc000577130) Data frame received for 5\nI0321 22:24:48.861020 4635 log.go:172] (0xc0008a8000) (5) Data frame handling\nI0321 22:24:48.861029 4635 log.go:172] (0xc0008a8000) (5) Data frame sent\nI0321 22:24:48.861035 4635 log.go:172] (0xc000577130) Data frame received for 5\nI0321 22:24:48.861041 4635 log.go:172] (0xc0008a8000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0321 22:24:48.861061 4635 log.go:172] (0xc000577130) Data frame received for 3\nI0321 22:24:48.861068 4635 log.go:172] (0xc00090a000) (3) Data frame handling\nI0321 22:24:48.861074 4635 log.go:172] (0xc00090a000) (3) Data frame sent\nI0321 22:24:48.861081 4635 log.go:172] (0xc000577130) Data frame received for 3\nI0321 22:24:48.861088 4635 log.go:172] (0xc00090a000) (3) Data frame handling\nI0321 22:24:48.862558 4635 log.go:172] (0xc000577130) Data frame received for 1\nI0321 22:24:48.862573 4635 log.go:172] (0xc000651a40) (1) Data frame handling\nI0321 22:24:48.862583 4635 log.go:172] (0xc000651a40) (1) Data frame sent\nI0321 22:24:48.862592 4635 log.go:172] (0xc000577130) (0xc000651a40) Stream removed, broadcasting: 1\nI0321 22:24:48.862763 4635 log.go:172] (0xc000577130) Go away received\nI0321 22:24:48.862817 4635 log.go:172] (0xc000577130) (0xc000651a40) Stream removed, broadcasting: 1\nI0321 22:24:48.862827 4635 log.go:172] (0xc000577130) (0xc00090a000) Stream removed, broadcasting: 3\nI0321 22:24:48.862834 4635 log.go:172] (0xc000577130) (0xc0008a8000) Stream removed, broadcasting: 5\n"
Mar 21 22:24:48.865: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 21 22:24:48.865: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 21 22:24:48.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8329 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 21 22:24:49.060: INFO: stderr: "I0321 22:24:48.994606 4657 log.go:172] (0xc000a20420) (0xc000a4c6e0) Create stream\nI0321 22:24:48.994679 4657 log.go:172] (0xc000a20420) (0xc000a4c6e0) Stream added, broadcasting: 1\nI0321 22:24:49.000402 4657 log.go:172] (0xc000a20420) Reply frame received for 1\nI0321 22:24:49.000439 4657 log.go:172] (0xc000a20420) (0xc000684640) Create stream\nI0321 22:24:49.000450 4657 log.go:172] (0xc000a20420) (0xc000684640) Stream added, broadcasting: 3\nI0321 22:24:49.001338 4657 log.go:172] (0xc000a20420) Reply frame received for 3\nI0321 22:24:49.001376 4657 log.go:172] (0xc000a20420) (0xc00044f400) Create stream\nI0321 22:24:49.001389 4657 log.go:172] (0xc000a20420) (0xc00044f400) Stream added, broadcasting: 5\nI0321 22:24:49.002065 4657 log.go:172] (0xc000a20420) Reply frame received for 5\nI0321 22:24:49.054507 4657 log.go:172] (0xc000a20420) Data frame received for 5\nI0321 22:24:49.054558 4657 log.go:172] (0xc000a20420) Data frame received for 3\nI0321 22:24:49.054600 4657 log.go:172] (0xc000684640) (3) Data frame handling\nI0321 22:24:49.054623 4657 log.go:172] (0xc000684640) (3) Data frame sent\nI0321 22:24:49.054640 4657 log.go:172] (0xc000a20420) Data frame received for 3\nI0321 22:24:49.054656 4657 log.go:172] (0xc000684640) (3) Data frame handling\nI0321 22:24:49.054737 4657 log.go:172] (0xc00044f400) (5) Data frame handling\nI0321 22:24:49.054819 4657 log.go:172] (0xc00044f400) (5) Data frame sent\nI0321 22:24:49.054850 4657 log.go:172] (0xc000a20420) Data frame received for 5\nI0321 22:24:49.054875 4657 log.go:172] (0xc00044f400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0321 22:24:49.056329 4657 log.go:172] (0xc000a20420) Data frame received for 1\nI0321 22:24:49.056351 4657 log.go:172] (0xc000a4c6e0) (1) Data frame handling\nI0321 22:24:49.056368 4657 log.go:172] (0xc000a4c6e0) (1) Data frame sent\nI0321 22:24:49.056378 4657 log.go:172] (0xc000a20420) (0xc000a4c6e0) Stream removed, broadcasting: 1\nI0321 22:24:49.056589 4657 log.go:172] (0xc000a20420) Go away received\nI0321 22:24:49.056621 4657 log.go:172] (0xc000a20420) (0xc000a4c6e0) Stream removed, broadcasting: 1\nI0321 22:24:49.056639 4657 log.go:172] (0xc000a20420) (0xc000684640) Stream removed, broadcasting: 3\nI0321 22:24:49.056647 4657 log.go:172] (0xc000a20420) (0xc00044f400) Stream removed, broadcasting: 5\n"
Mar 21 22:24:49.060: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 21 22:24:49.060: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 21 22:24:49.060: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar 21 22:25:09.074: INFO: Deleting all statefulset in ns statefulset-8329
Mar 21 22:25:09.077: INFO: Scaling statefulset ss to 0
Mar 21 22:25:09.085: INFO: Waiting for statefulset status.replicas updated to 0
Mar 21 22:25:09.087: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 22:25:09.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8329" for this suite.
• [SLOW TEST:82.050 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":277,"skipped":4519,"failed":0}
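The ordering checked above comes from the StatefulSet's OrderedReady pod management policy (the default): pods are created one ordinal at a time (ss-0, ss-1, ss-2), deleted in reverse, and scaling in either direction halts while any pod is unready; the test forces unreadiness by moving httpd's index.html out from under the readiness probe. A rough manual reproduction on a throwaway cluster (a sketch; ss and statefulset-8329 are this run's generated names and would differ):

  kubectl -n statefulset-8329 scale statefulset ss --replicas=3                            # proceeds strictly in ordinal order
  kubectl -n statefulset-8329 exec ss-0 -- mv /usr/local/apache2/htdocs/index.html /tmp/   # readiness probe now fails on ss-0
  kubectl -n statefulset-8329 scale statefulset ss --replicas=0                            # halts until ss-0 is healthy again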
SSSSS
------------------------------
[sig-cli] Kubectl client Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 21 22:25:09.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Mar 21 22:25:09.174: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix017890981/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 21 22:25:09.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3021" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":278,"skipped":4524,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Mar 21 22:25:09.253: INFO: Running AfterSuite actions on all nodes
Mar 21 22:25:09.253: INFO: Running AfterSuite actions on node 1
Mar 21 22:25:09.253: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4565,"failed":0}
Ran 278 of 4843 Specs in 4679.390 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4565 Skipped
PASS
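For reference, the --unix-socket proxy mode exercised by the final spec can be reproduced by hand (a sketch; the socket path is arbitrary, and curl's --unix-socket flag needs curl 7.40 or newer):

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/    # same /api/ payload the test retrieves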