I0120 21:09:13.634202 9 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0120 21:09:13.635400 9 e2e.go:109] Starting e2e run "8abc3cf8-1405-42c2-ac6c-f390de6c22b1" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579554551 - Will randomize all specs
Will run 278 of 4814 specs

Jan 20 21:09:13.697: INFO: >>> kubeConfig: /root/.kube/config
Jan 20 21:09:13.702: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 20 21:09:13.738: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 20 21:09:13.811: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 20 21:09:13.811: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 20 21:09:13.811: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 20 21:09:13.828: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 20 21:09:13.829: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 20 21:09:13.829: INFO: e2e test version: v1.17.0
Jan 20 21:09:13.832: INFO: kube-apiserver version: v1.17.0
Jan 20 21:09:13.832: INFO: >>> kubeConfig: /root/.kube/config
Jan 20 21:09:13.842: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 21:09:13.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jan 20 21:09:13.980: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
STEP: creating a pod
Jan 20 21:09:13.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-4126 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jan 20 21:09:16.198: INFO: stderr: ""
Jan 20 21:09:16.198: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Jan 20 21:09:16.199: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jan 20 21:09:16.199: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4126" to be "running and ready, or succeeded"
Jan 20 21:09:16.203: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04652ms
Jan 20 21:09:18.211: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011834992s
Jan 20 21:09:20.218: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019092439s
Jan 20 21:09:22.225: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02604313s
Jan 20 21:09:24.235: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.035823597s
Jan 20 21:09:24.235: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jan 20 21:09:24.235: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jan 20 21:09:24.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4126'
Jan 20 21:09:24.479: INFO: stderr: ""
Jan 20 21:09:24.479: INFO: stdout: "I0120 21:09:21.835635 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/6mb 497\nI0120 21:09:22.035807 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/fkt 250\nI0120 21:09:22.235816 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/h8w 354\nI0120 21:09:22.436090 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/tm9 259\nI0120 21:09:22.635899 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/b6l 397\nI0120 21:09:22.835879 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/mtc 424\nI0120 21:09:23.036036 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/qg5 492\nI0120 21:09:23.235937 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/wsm 538\nI0120 21:09:23.436086 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/qbzd 522\nI0120 21:09:23.636152 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/bl5s 347\nI0120 21:09:23.836300 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/rhkr 239\nI0120 21:09:24.035817 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/w6r9 519\nI0120 21:09:24.236078 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/j485 515\nI0120 21:09:24.435945 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/5zp9 594\n"
STEP: limiting log lines
Jan 20 21:09:24.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4126 --tail=1'
Jan 20 21:09:24.671: INFO: stderr: ""
Jan 20 21:09:24.671: INFO: stdout: "I0120 21:09:24.636577 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/8gp 359\n"
Jan 20 21:09:24.671: INFO: got output "I0120 21:09:24.636577 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/8gp 359\n"
STEP: limiting log bytes
Jan 20 21:09:24.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4126 --limit-bytes=1'
Jan 20 21:09:24.806: INFO: stderr: ""
Jan 20 21:09:24.806: INFO: stdout: "I"
Jan 20 21:09:24.806: INFO: got output "I"
STEP: exposing timestamps
Jan 20 21:09:24.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4126 --tail=1 --timestamps'
Jan 20 21:09:24.950: INFO: stderr: ""
Jan 20 21:09:24.950: INFO: stdout: "2020-01-20T21:09:24.83727429Z I0120 21:09:24.836552 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/nq9 293\n"
Jan 20 21:09:24.950: INFO: got output "2020-01-20T21:09:24.83727429Z I0120 21:09:24.836552 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/nq9 293\n"
STEP: restricting to a time range
Jan 20 21:09:27.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4126 --since=1s'
Jan 20 21:09:27.715: INFO: stderr: ""
Jan 20 21:09:27.715: INFO: stdout: "I0120 21:09:26.835896 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/f7k4 350\nI0120 21:09:27.035882 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/h2l 441\nI0120 21:09:27.235880 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/ns/pods/tg6z 295\nI0120 21:09:27.435943 1 logs_generator.go:76] 28 POST /api/v1/namespaces/default/pods/vszb 591\nI0120 21:09:27.635829 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/6df4 421\n"
Jan 20 21:09:27.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4126 --since=24h'
Jan 20 21:09:27.965: INFO: stderr: ""
Jan 20 21:09:27.965: INFO: stdout: "I0120 21:09:21.835635 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/6mb 497\nI0120 21:09:22.035807 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/fkt 250\nI0120 21:09:22.235816 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/h8w 354\nI0120 21:09:22.436090 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/tm9 259\nI0120 21:09:22.635899 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/b6l 397\nI0120 21:09:22.835879 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/mtc 424\nI0120 21:09:23.036036 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/qg5 492\nI0120 21:09:23.235937 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/wsm 538\nI0120 21:09:23.436086 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/qbzd 522\nI0120 21:09:23.636152 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/bl5s 347\nI0120 21:09:23.836300 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/rhkr 239\nI0120 21:09:24.035817 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/w6r9 519\nI0120 21:09:24.236078 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/j485 515\nI0120 21:09:24.435945 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/5zp9 594\nI0120 21:09:24.636577 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/8gp 359\nI0120 21:09:24.836552 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/nq9 293\nI0120 21:09:25.035961 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/44rc 296\nI0120 21:09:25.235934 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/kcs 256\nI0120 21:09:25.435966 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/bwx5 264\nI0120 21:09:25.635990 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/rzwh 323\nI0120 21:09:25.835952 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/phpc 447\nI0120 21:09:26.036032 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/sd8 290\nI0120 21:09:26.235930 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/hzz 470\nI0120 21:09:26.436178 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/ltfq 208\nI0120 21:09:26.636105 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/6mb 207\nI0120 21:09:26.835896 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/f7k4 350\nI0120 21:09:27.035882 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/h2l 441\nI0120 21:09:27.235880 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/ns/pods/tg6z 295\nI0120 21:09:27.435943 1 logs_generator.go:76] 28 POST /api/v1/namespaces/default/pods/vszb 591\nI0120 21:09:27.635829 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/6df4 421\nI0120 21:09:27.835865 1 logs_generator.go:76] 30 POST /api/v1/namespaces/ns/pods/thp5 448\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
Jan 20 21:09:27.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4126'
Jan 20 21:09:32.473: INFO: stderr: ""
Jan 20 21:09:32.473: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:09:32.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4126" for this suite.
• [SLOW TEST:18.672 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":1,"skipped":14,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
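The spec above exercises kubectl's server-side log filtering. A minimal sketch of the same checks, reusing the pod and namespace names from this run (any running pod would do; the flags are standard kubectl):

    # Full log stream for container logs-generator in pod logs-generator
    kubectl logs logs-generator logs-generator --namespace=kubectl-4126
    # Only the most recent line
    kubectl logs logs-generator logs-generator --namespace=kubectl-4126 --tail=1
    # Truncate output after one byte
    kubectl logs logs-generator logs-generator --namespace=kubectl-4126 --limit-bytes=1
    # Prefix each line with an RFC3339 timestamp
    kubectl logs logs-generator logs-generator --namespace=kubectl-4126 --tail=1 --timestamps
    # Restrict to a relative time window
    kubectl logs logs-generator logs-generator --namespace=kubectl-4126 --since=1s
    kubectl logs logs-generator logs-generator --namespace=kubectl-4126 --since=24h
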
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 21:09:32.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0120 21:09:35.587137 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 20 21:09:35.587: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:09:35.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4691" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":2,"skipped":41,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
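The garbage collector spec relies on ownerReferences: deleting the Deployment without orphaning lets the GC controller remove the dependent ReplicaSet and Pods (the "expected 0 rs, got 1 rs" STEPs are the test polling until collection finishes). A hedged sketch of observing the same behavior by hand; the deployment name and image are placeholders, not from this run:

    kubectl create deployment gc-demo --image=nginx
    kubectl get rs -l app=gc-demo            # one ReplicaSet, owned by the Deployment
    kubectl delete deployment gc-demo        # cascading (non-orphaning) delete
    kubectl get rs,pods -l app=gc-demo       # empties out once the GC catches up
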
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 21:09:35.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1636.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1636.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 20 21:09:48.020: INFO: DNS probes using dns-1636/dns-test-29dcf5e8-f6b2-454c-a18b-fe40d8af8abd succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:09:48.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1636" for this suite.
• [SLOW TEST:12.532 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":3,"skipped":59,"failed":0}
SSSSSSSSSSSSSS
------------------------------
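The DNS probes above loop dig inside wheezy and jessie pods and write an OK marker for each record that resolves over UDP and TCP. A rough one-off equivalent from an ad-hoc pod; the image is a placeholder for any image that ships dig, not something taken from this run:

    kubectl run dns-probe --rm -it --restart=Never --image=<image-with-dig> -- \
      dig +search +short kubernetes.default.svc.cluster.local A
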
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 21:09:48.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 20 21:09:48.319: INFO: Waiting up to 5m0s for pod "pod-c61e9e08-7189-42f5-ac9e-4420ee9d09fc" in namespace "emptydir-1321" to be "success or failure"
Jan 20 21:09:48.335: INFO: Pod "pod-c61e9e08-7189-42f5-ac9e-4420ee9d09fc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.979648ms
Jan 20 21:09:50.351: INFO: Pod "pod-c61e9e08-7189-42f5-ac9e-4420ee9d09fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032225315s
Jan 20 21:09:52.360: INFO: Pod "pod-c61e9e08-7189-42f5-ac9e-4420ee9d09fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040831427s
Jan 20 21:09:54.368: INFO: Pod "pod-c61e9e08-7189-42f5-ac9e-4420ee9d09fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049751614s
Jan 20 21:09:56.377: INFO: Pod "pod-c61e9e08-7189-42f5-ac9e-4420ee9d09fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057845957s
Jan 20 21:09:58.384: INFO: Pod "pod-c61e9e08-7189-42f5-ac9e-4420ee9d09fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065223246s
STEP: Saw pod success
Jan 20 21:09:58.384: INFO: Pod "pod-c61e9e08-7189-42f5-ac9e-4420ee9d09fc" satisfied condition "success or failure"
Jan 20 21:09:58.390: INFO: Trying to get logs from node jerma-node pod pod-c61e9e08-7189-42f5-ac9e-4420ee9d09fc container test-container:
STEP: delete the pod
Jan 20 21:09:58.424: INFO: Waiting for pod pod-c61e9e08-7189-42f5-ac9e-4420ee9d09fc to disappear
Jan 20 21:09:58.429: INFO: Pod pod-c61e9e08-7189-42f5-ac9e-4420ee9d09fc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:09:58.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1321" for this suite.
• [SLOW TEST:10.351 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":73,"failed":0}
S
------------------------------
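The EmptyDir spec mounts a tmpfs-backed emptyDir as a non-root user and verifies the 0666 file mode. A minimal sketch of such a pod; the name, image, and probe command are illustrative, not the test's actual manifest:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001              # non-root, as in the (non-root,0666,tmpfs) case
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/volume
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory             # tmpfs rather than node disk
    EOF
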
[sig-network] Services
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 21:09:58.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8669
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-8669
I0120 21:09:58.661422 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8669, replica count: 2
I0120 21:10:01.713042 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0120 21:10:04.713531 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0120 21:10:07.714085 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 20 21:10:07.714: INFO: Creating new exec pod
Jan 20 21:10:16.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8669 execpodlbdsr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 20 21:10:17.217: INFO: stderr: "I0120 21:10:17.028421 201 log.go:172] (0xc00098a370) (0xc00065ff40) Create stream\nI0120 21:10:17.028743 201 log.go:172] (0xc00098a370) (0xc00065ff40) Stream added, broadcasting: 1\nI0120 21:10:17.033761 201 log.go:172] (0xc00098a370) Reply frame received for 1\nI0120 21:10:17.033808 201 log.go:172] (0xc00098a370) (0xc000aaa140) Create stream\nI0120 21:10:17.033819 201 log.go:172] (0xc00098a370) (0xc000aaa140) Stream added, broadcasting: 3\nI0120 21:10:17.035432 201 log.go:172] (0xc00098a370) Reply frame received for 3\nI0120 21:10:17.035472 201 log.go:172] (0xc00098a370) (0xc000aaa1e0) Create stream\nI0120 21:10:17.035486 201 log.go:172] (0xc00098a370) (0xc000aaa1e0) Stream added, broadcasting: 5\nI0120 21:10:17.036616 201 log.go:172] (0xc00098a370) Reply frame received for 5\nI0120 21:10:17.106566 201 log.go:172] (0xc00098a370) Data frame received for 5\nI0120 21:10:17.106683 201 log.go:172] (0xc000aaa1e0) (5) Data frame handling\nI0120 21:10:17.106707 201 log.go:172] (0xc000aaa1e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0120 21:10:17.121825 201 log.go:172] (0xc00098a370) Data frame received for 5\nI0120 21:10:17.121927 201 log.go:172] (0xc000aaa1e0) (5) Data frame handling\nI0120 21:10:17.121968 201 log.go:172] (0xc000aaa1e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0120 21:10:17.197322 201 log.go:172] (0xc00098a370) (0xc000aaa140) Stream removed, broadcasting: 3\nI0120 21:10:17.197499 201 log.go:172] (0xc00098a370) Data frame received for 1\nI0120 21:10:17.197519 201 log.go:172] (0xc00098a370) (0xc000aaa1e0) Stream removed, broadcasting: 5\nI0120 21:10:17.197578 201 log.go:172] (0xc00065ff40) (1) Data frame handling\nI0120 21:10:17.197602 201 log.go:172] (0xc00065ff40) (1) Data frame sent\nI0120 21:10:17.197608 201 log.go:172] (0xc00098a370) (0xc00065ff40) Stream removed, broadcasting: 1\nI0120 21:10:17.197622 201 log.go:172] (0xc00098a370) Go away received\nI0120 21:10:17.200048 201 log.go:172] (0xc00098a370) (0xc00065ff40) Stream removed, broadcasting: 1\nI0120 21:10:17.200175 201 log.go:172] (0xc00098a370) (0xc000aaa140) Stream removed, broadcasting: 3\nI0120 21:10:17.200199 201 log.go:172] (0xc00098a370) (0xc000aaa1e0) Stream removed, broadcasting: 5\n"
Jan 20 21:10:17.218: INFO: stdout: ""
Jan 20 21:10:17.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8669 execpodlbdsr -- /bin/sh -x -c nc -zv -t -w 2 10.96.3.88 80'
Jan 20 21:10:17.581: INFO: stderr: "I0120 21:10:17.447663 221 log.go:172] (0xc0008f86e0) (0xc000992140) Create stream\nI0120 21:10:17.447834 221 log.go:172] (0xc0008f86e0) (0xc000992140) Stream added, broadcasting: 1\nI0120 21:10:17.451221 221 log.go:172] (0xc0008f86e0) Reply frame received for 1\nI0120 21:10:17.451267 221 log.go:172] (0xc0008f86e0) (0xc000763540) Create stream\nI0120 21:10:17.451275 221 log.go:172] (0xc0008f86e0) (0xc000763540) Stream added, broadcasting: 3\nI0120 21:10:17.452478 221 log.go:172] (0xc0008f86e0) Reply frame received for 3\nI0120 21:10:17.452562 221 log.go:172] (0xc0008f86e0) (0xc0007635e0) Create stream\nI0120 21:10:17.452579 221 log.go:172] (0xc0008f86e0) (0xc0007635e0) Stream added, broadcasting: 5\nI0120 21:10:17.454281 221 log.go:172] (0xc0008f86e0) Reply frame received for 5\nI0120 21:10:17.506660 221 log.go:172] (0xc0008f86e0) Data frame received for 5\nI0120 21:10:17.506894 221 log.go:172] (0xc0007635e0) (5) Data frame handling\nI0120 21:10:17.506945 221 log.go:172] (0xc0007635e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.3.88 80\nI0120 21:10:17.507964 221 log.go:172] (0xc0008f86e0) Data frame received for 5\nI0120 21:10:17.508002 221 log.go:172] (0xc0007635e0) (5) Data frame handling\nI0120 21:10:17.508037 221 log.go:172] (0xc0007635e0) (5) Data frame sent\nConnection to 10.96.3.88 80 port [tcp/http] succeeded!\nI0120 21:10:17.569186 221 log.go:172] (0xc0008f86e0) Data frame received for 1\nI0120 21:10:17.569398 221 log.go:172] (0xc0008f86e0) (0xc000763540) Stream removed, broadcasting: 3\nI0120 21:10:17.569503 221 log.go:172] (0xc000992140) (1) Data frame handling\nI0120 21:10:17.569530 221 log.go:172] (0xc000992140) (1) Data frame sent\nI0120 21:10:17.569554 221 log.go:172] (0xc0008f86e0) (0xc000992140) Stream removed, broadcasting: 1\nI0120 21:10:17.569640 221 log.go:172] (0xc0008f86e0) (0xc0007635e0) Stream removed, broadcasting: 5\nI0120 21:10:17.569684 221 log.go:172] (0xc0008f86e0) Go away received\nI0120 21:10:17.570475 221 log.go:172] (0xc0008f86e0) (0xc000992140) Stream removed, broadcasting: 1\nI0120 21:10:17.570492 221 log.go:172] (0xc0008f86e0) (0xc000763540) Stream removed, broadcasting: 3\nI0120 21:10:17.570502 221 log.go:172] (0xc0008f86e0) (0xc0007635e0) Stream removed, broadcasting: 5\n"
Jan 20 21:10:17.581: INFO: stdout: ""
Jan 20 21:10:17.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8669 execpodlbdsr -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30173'
Jan 20 21:10:17.891: INFO: stderr: "I0120 21:10:17.716019 241 log.go:172] (0xc0007c8a50) (0xc0007c4000) Create stream\nI0120 21:10:17.716181 241 log.go:172] (0xc0007c8a50) (0xc0007c4000) Stream added, broadcasting: 1\nI0120 21:10:17.720856 241 log.go:172] (0xc0007c8a50) Reply frame received for 1\nI0120 21:10:17.720906 241 log.go:172] (0xc0007c8a50) (0xc000685a40) Create stream\nI0120 21:10:17.720915 241 log.go:172] (0xc0007c8a50) (0xc000685a40) Stream added, broadcasting: 3\nI0120 21:10:17.721827 241 log.go:172] (0xc0007c8a50) Reply frame received for 3\nI0120 21:10:17.721865 241 log.go:172] (0xc0007c8a50) (0xc0004ca000) Create stream\nI0120 21:10:17.721873 241 log.go:172] (0xc0007c8a50) (0xc0004ca000) Stream added, broadcasting: 5\nI0120 21:10:17.722829 241 log.go:172] (0xc0007c8a50) Reply frame received for 5\nI0120 21:10:17.777465 241 log.go:172] (0xc0007c8a50) Data frame received for 5\nI0120 21:10:17.777770 241 log.go:172] (0xc0004ca000) (5) Data frame handling\nI0120 21:10:17.777831 241 log.go:172] (0xc0004ca000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30173\nI0120 21:10:17.778306 241 log.go:172] (0xc0007c8a50) Data frame received for 5\nI0120 21:10:17.778332 241 log.go:172] (0xc0004ca000) (5) Data frame handling\nI0120 21:10:17.778350 241 log.go:172] (0xc0004ca000) (5) Data frame sent\nConnection to 10.96.2.250 30173 port [tcp/30173] succeeded!\nI0120 21:10:17.878321 241 log.go:172] (0xc0007c8a50) Data frame received for 1\nI0120 21:10:17.878514 241 log.go:172] (0xc0007c8a50) (0xc000685a40) Stream removed, broadcasting: 3\nI0120 21:10:17.878705 241 log.go:172] (0xc0007c4000) (1) Data frame handling\nI0120 21:10:17.878762 241 log.go:172] (0xc0007c4000) (1) Data frame sent\nI0120 21:10:17.878796 241 log.go:172] (0xc0007c8a50) (0xc0007c4000) Stream removed, broadcasting: 1\nI0120 21:10:17.880184 241 log.go:172] (0xc0007c8a50) (0xc0004ca000) Stream removed, broadcasting: 5\nI0120 21:10:17.880545 241 log.go:172] (0xc0007c8a50) Go away received\nI0120 21:10:17.881156 241 log.go:172] (0xc0007c8a50) (0xc0007c4000) Stream removed, broadcasting: 1\nI0120 21:10:17.881179 241 log.go:172] (0xc0007c8a50) (0xc000685a40) Stream removed, broadcasting: 3\nI0120 21:10:17.881195 241 log.go:172] (0xc0007c8a50) (0xc0004ca000) Stream removed, broadcasting: 5\n"
Jan 20 21:10:17.891: INFO: stdout: ""
Jan 20 21:10:17.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8669 execpodlbdsr -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30173'
Jan 20 21:10:18.244: INFO: stderr: "I0120 21:10:18.060847 262 log.go:172] (0xc000114e70) (0xc0005d01e0) Create stream\nI0120 21:10:18.061107 262 log.go:172] (0xc000114e70) (0xc0005d01e0) Stream added, broadcasting: 1\nI0120 21:10:18.065569 262 log.go:172] (0xc000114e70) Reply frame received for 1\nI0120 21:10:18.065711 262 log.go:172] (0xc000114e70) (0xc0005d0280) Create stream\nI0120 21:10:18.065723 262 log.go:172] (0xc000114e70) (0xc0005d0280) Stream added, broadcasting: 3\nI0120 21:10:18.067507 262 log.go:172] (0xc000114e70) Reply frame received for 3\nI0120 21:10:18.067569 262 log.go:172] (0xc000114e70) (0xc0005ebae0) Create stream\nI0120 21:10:18.067581 262 log.go:172] (0xc000114e70) (0xc0005ebae0) Stream added, broadcasting: 5\nI0120 21:10:18.068879 262 log.go:172] (0xc000114e70) Reply frame received for 5\nI0120 21:10:18.139804 262 log.go:172] (0xc000114e70) Data frame received for 5\nI0120 21:10:18.139903 262 log.go:172] (0xc0005ebae0) (5) Data frame handling\nI0120 21:10:18.139973 262 log.go:172] (0xc0005ebae0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30173\nI0120 21:10:18.151042 262 log.go:172] (0xc000114e70) Data frame received for 5\nI0120 21:10:18.151193 262 log.go:172] (0xc0005ebae0) (5) Data frame handling\nI0120 21:10:18.151235 262 log.go:172] (0xc0005ebae0) (5) Data frame sent\nConnection to 10.96.1.234 30173 port [tcp/30173] succeeded!\nI0120 21:10:18.232746 262 log.go:172] (0xc000114e70) Data frame received for 1\nI0120 21:10:18.232954 262 log.go:172] (0xc000114e70) (0xc0005d0280) Stream removed, broadcasting: 3\nI0120 21:10:18.233133 262 log.go:172] (0xc0005d01e0) (1) Data frame handling\nI0120 21:10:18.233160 262 log.go:172] (0xc000114e70) (0xc0005ebae0) Stream removed, broadcasting: 5\nI0120 21:10:18.233182 262 log.go:172] (0xc0005d01e0) (1) Data frame sent\nI0120 21:10:18.233202 262 log.go:172] (0xc000114e70) (0xc0005d01e0) Stream removed, broadcasting: 1\nI0120 21:10:18.233245 262 log.go:172] (0xc000114e70) Go away received\nI0120 21:10:18.234912 262 log.go:172] (0xc000114e70) (0xc0005d01e0) Stream removed, broadcasting: 1\nI0120 21:10:18.234943 262 log.go:172] (0xc000114e70) (0xc0005d0280) Stream removed, broadcasting: 3\nI0120 21:10:18.234956 262 log.go:172] (0xc000114e70) (0xc0005ebae0) Stream removed, broadcasting: 5\n"
Jan 20 21:10:18.244: INFO: stdout: ""
Jan 20 21:10:18.244: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:10:18.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8669" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:19.888 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":5,"skipped":74,"failed":0}
SSSSSSSSSS
------------------------------
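The connectivity checks in the spec above are plain nc probes run from the helper pod: first against the service name and ClusterIP on port 80, then against both node IPs on the allocated NodePort. Reproducing them as they ran (all names and addresses are from this run):

    kubectl exec -n services-8669 execpodlbdsr -- nc -zv -t -w 2 externalname-service 80   # service DNS name
    kubectl exec -n services-8669 execpodlbdsr -- nc -zv -t -w 2 10.96.3.88 80             # ClusterIP
    kubectl exec -n services-8669 execpodlbdsr -- nc -zv -t -w 2 10.96.2.250 30173         # node IP + NodePort
    kubectl exec -n services-8669 execpodlbdsr -- nc -zv -t -w 2 10.96.1.234 30173         # second node
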
[sig-cli] Kubectl client Kubectl expose
  should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 21:10:18.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jan 20 21:10:18.529: INFO: namespace kubectl-4275
Jan 20 21:10:18.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4275'
Jan 20 21:10:18.989: INFO: stderr: ""
Jan 20 21:10:18.989: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 20 21:10:19.999: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 20 21:10:19.999: INFO: Found 0 / 1
Jan 20 21:10:20.999: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 20 21:10:21.000: INFO: Found 0 / 1
Jan 20 21:10:21.998: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 20 21:10:21.998: INFO: Found 0 / 1
Jan 20 21:10:22.996: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 20 21:10:22.996: INFO: Found 0 / 1
Jan 20 21:10:23.999: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 20 21:10:23.999: INFO: Found 0 / 1
Jan 20 21:10:24.997: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 20 21:10:24.997: INFO: Found 0 / 1
Jan 20 21:10:25.994: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 20 21:10:25.995: INFO: Found 0 / 1
Jan 20 21:10:26.999: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 20 21:10:26.999: INFO: Found 0 / 1
Jan 20 21:10:28.024: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 20 21:10:28.024: INFO: Found 0 / 1
Jan 20 21:10:29.505: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 20 21:10:29.505: INFO: Found 1 / 1
Jan 20 21:10:29.505: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 20 21:10:29.511: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 20 21:10:29.511: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 20 21:10:29.511: INFO: wait on agnhost-master startup in kubectl-4275
Jan 20 21:10:29.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-xkcl8 agnhost-master --namespace=kubectl-4275'
Jan 20 21:10:29.959: INFO: stderr: ""
Jan 20 21:10:29.960: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 20 21:10:29.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4275'
Jan 20 21:10:30.201: INFO: stderr: ""
Jan 20 21:10:30.201: INFO: stdout: "service/rm2 exposed\n"
Jan 20 21:10:30.210: INFO: Service rm2 in namespace kubectl-4275 found.
STEP: exposing service
Jan 20 21:10:32.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4275'
Jan 20 21:10:32.475: INFO: stderr: ""
Jan 20 21:10:32.475: INFO: stdout: "service/rm3 exposed\n"
Jan 20 21:10:32.505: INFO: Service rm3 in namespace kubectl-4275 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:10:34.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4275" for this suite.
• [SLOW TEST:16.176 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":6,"skipped":84,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
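The expose spec chains two services onto one set of pods: rm2 exposes the replication controller, then rm3 exposes rm2 itself; both forward to container port 6379. The commands from the run, plus a follow-up check (the endpoints inspection is an assumed extra step, not from this run):

    kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4275
    kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4275
    # Both services should list endpoints pointing at the same pod IP on 6379
    kubectl get endpoints rm2 rm3 --namespace=kubectl-4275
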
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:10:39.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:10:41.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:10:43.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151435, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 21:10:46.651: INFO: Waiting for 
amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:10:46.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9376" for this suite. STEP: Destroying namespace "webhook-9376-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.480 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":7,"skipped":112,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:10:47.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 21:10:47.174: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d" in namespace "projected-2450" to be "success or failure" Jan 20 21:10:47.248: INFO: Pod "downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 73.648122ms Jan 20 21:10:49.259: INFO: Pod "downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084920971s Jan 20 21:10:51.267: INFO: Pod "downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d": Phase="Pending", Reason="", readiness=false. 
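The webhook spec registers a ValidatingWebhookConfiguration and then updates and patches its rules so that CREATE of ConfigMaps is alternately rejected and allowed. A hedged sketch of the patch step only; the configuration name and the rule/webhook indices are placeholders, not this run's generated names:

    # Drop CREATE from the first rule of the first webhook
    kubectl patch validatingwebhookconfiguration <config-name> --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
    # Re-add CREATE
    kubectl patch validatingwebhookconfiguration <config-name> --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'
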
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 21:10:47.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 20 21:10:47.174: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d" in namespace "projected-2450" to be "success or failure"
Jan 20 21:10:47.248: INFO: Pod "downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 73.648122ms
Jan 20 21:10:49.259: INFO: Pod "downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084920971s
Jan 20 21:10:51.267: INFO: Pod "downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093074986s
Jan 20 21:10:53.274: INFO: Pod "downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100075807s
Jan 20 21:10:55.282: INFO: Pod "downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107392593s
Jan 20 21:10:57.289: INFO: Pod "downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114877995s
STEP: Saw pod success
Jan 20 21:10:57.289: INFO: Pod "downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d" satisfied condition "success or failure"
Jan 20 21:10:57.293: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d container client-container:
STEP: delete the pod
Jan 20 21:10:57.615: INFO: Waiting for pod downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d to disappear
Jan 20 21:10:57.622: INFO: Pod downwardapi-volume-2fd769de-2b50-4561-bad9-2b8c6c3a8f8d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:10:57.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2450" for this suite.
• [SLOW TEST:10.604 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":120,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
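The projected downwardAPI spec surfaces the container's memory request as a file in a projected volume and has the pod read it back. A minimal sketch under assumed names (pod name, image, file path, and probe command are illustrative, not the test's manifest):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-downward-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
        resources:
          requests:
            memory: "32Mi"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: mem_request
                resourceFieldRef:
                  containerName: client-container
                  resource: requests.memory
    EOF
    kubectl logs projected-downward-demo   # should print the request in bytes (33554432)
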
"pods-8093" for this suite. • [SLOW TEST:18.456 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":9,"skipped":165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:11:16.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Jan 20 21:11:22.986: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5545 pod-service-account-10d9cd5c-7e1f-48ac-a75c-0fdc4e3e9f96 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 20 21:11:23.339: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5545 pod-service-account-10d9cd5c-7e1f-48ac-a75c-0fdc4e3e9f96 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 20 21:11:23.729: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5545 pod-service-account-10d9cd5c-7e1f-48ac-a75c-0fdc4e3e9f96 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:11:24.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5545" for this suite. 
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 21:11:16.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 20 21:11:22.986: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5545 pod-service-account-10d9cd5c-7e1f-48ac-a75c-0fdc4e3e9f96 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 20 21:11:23.339: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5545 pod-service-account-10d9cd5c-7e1f-48ac-a75c-0fdc4e3e9f96 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 20 21:11:23.729: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5545 pod-service-account-10d9cd5c-7e1f-48ac-a75c-0fdc4e3e9f96 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:11:24.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5545" for this suite.
• [SLOW TEST:8.081 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":10,"skipped":244,"failed":0}
SSSSSS
------------------------------
[sig-node] ConfigMap
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 21:11:24.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-2ac2da6f-a1e0-482b-907f-70833e0a3011
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:11:24.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3628" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":11,"skipped":250,"failed":0}
SSSSSS
------------------------------
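The ConfigMap spec above expects the API server to reject an empty data key. A hedged sketch of triggering the same validation by hand; the object name is a placeholder, and the exact error text may differ by version:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: empty-key-demo
    data:
      "": "value"     # empty keys fail apiserver validation
    EOF
    # Expected outcome: the request is rejected as Invalid rather than created
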
Elapsed: 4.056868831s Jan 20 21:11:30.521: INFO: Pod "pod-projected-configmaps-c2491ad1-4c8a-4c0a-adf8-c9f3a0826f53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07313669s Jan 20 21:11:32.530: INFO: Pod "pod-projected-configmaps-c2491ad1-4c8a-4c0a-adf8-c9f3a0826f53": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082460131s Jan 20 21:11:34.541: INFO: Pod "pod-projected-configmaps-c2491ad1-4c8a-4c0a-adf8-c9f3a0826f53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09297372s STEP: Saw pod success Jan 20 21:11:34.541: INFO: Pod "pod-projected-configmaps-c2491ad1-4c8a-4c0a-adf8-c9f3a0826f53" satisfied condition "success or failure" Jan 20 21:11:34.546: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c2491ad1-4c8a-4c0a-adf8-c9f3a0826f53 container projected-configmap-volume-test: STEP: delete the pod Jan 20 21:11:34.594: INFO: Waiting for pod pod-projected-configmaps-c2491ad1-4c8a-4c0a-adf8-c9f3a0826f53 to disappear Jan 20 21:11:34.604: INFO: Pod pod-projected-configmaps-c2491ad1-4c8a-4c0a-adf8-c9f3a0826f53 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:11:34.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4054" for this suite. • [SLOW TEST:10.304 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":256,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:11:34.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 20 21:11:34.787: INFO: Waiting up to 5m0s for pod "downward-api-7c25db9b-9af4-455a-87ac-50dcbc1ebeb1" in namespace "downward-api-1571" to be "success or failure" Jan 20 21:11:34.792: INFO: Pod "downward-api-7c25db9b-9af4-455a-87ac-50dcbc1ebeb1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.532587ms Jan 20 21:11:36.800: INFO: Pod "downward-api-7c25db9b-9af4-455a-87ac-50dcbc1ebeb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013482416s Jan 20 21:11:38.808: INFO: Pod "downward-api-7c25db9b-9af4-455a-87ac-50dcbc1ebeb1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.021557815s Jan 20 21:11:40.818: INFO: Pod "downward-api-7c25db9b-9af4-455a-87ac-50dcbc1ebeb1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030889277s Jan 20 21:11:42.825: INFO: Pod "downward-api-7c25db9b-9af4-455a-87ac-50dcbc1ebeb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038330382s STEP: Saw pod success Jan 20 21:11:42.825: INFO: Pod "downward-api-7c25db9b-9af4-455a-87ac-50dcbc1ebeb1" satisfied condition "success or failure" Jan 20 21:11:42.829: INFO: Trying to get logs from node jerma-node pod downward-api-7c25db9b-9af4-455a-87ac-50dcbc1ebeb1 container dapi-container: STEP: delete the pod Jan 20 21:11:42.963: INFO: Waiting for pod downward-api-7c25db9b-9af4-455a-87ac-50dcbc1ebeb1 to disappear Jan 20 21:11:42.977: INFO: Pod downward-api-7c25db9b-9af4-455a-87ac-50dcbc1ebeb1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:11:42.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1571" for this suite. • [SLOW TEST:8.370 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":259,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:11:42.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5030 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5030 STEP: Creating statefulset with conflicting port in namespace statefulset-5030 STEP: Waiting until pod test-pod starts running in namespace statefulset-5030 STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-5030 Jan 20 21:11:51.329: INFO: Observed stateful pod in namespace: statefulset-5030, name: ss-0, uid: cebf1ab4-7da9-40ce-873e-d0b7ce8dbeb0, status phase: Pending. Waiting for statefulset controller to delete.
Jan 20 21:11:52.306: INFO: Observed stateful pod in namespace: statefulset-5030, name: ss-0, uid: cebf1ab4-7da9-40ce-873e-d0b7ce8dbeb0, status phase: Failed. Waiting for statefulset controller to delete. Jan 20 21:11:52.316: INFO: Observed stateful pod in namespace: statefulset-5030, name: ss-0, uid: cebf1ab4-7da9-40ce-873e-d0b7ce8dbeb0, status phase: Failed. Waiting for statefulset controller to delete. Jan 20 21:11:52.329: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5030 STEP: Removing pod with conflicting port in namespace statefulset-5030 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5030 and in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 20 21:12:02.942: INFO: Deleting all statefulset in ns statefulset-5030 Jan 20 21:12:02.945: INFO: Scaling statefulset ss to 0 Jan 20 21:12:12.977: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 21:12:12.983: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:12:13.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5030" for this suite. • [SLOW TEST:30.063 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":14,"skipped":271,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:12:13.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 20 21:12:13.119: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 20 21:12:13.135: INFO: Waiting for terminating namespaces to be deleted...
Jan 20 21:12:13.177: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 20 21:12:13.187: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 20 21:12:13.188: INFO: Container kube-proxy ready: true, restart count 0 Jan 20 21:12:13.188: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 20 21:12:13.188: INFO: Container weave ready: true, restart count 1 Jan 20 21:12:13.188: INFO: Container weave-npc ready: true, restart count 0 Jan 20 21:12:13.188: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 20 21:12:13.214: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 20 21:12:13.214: INFO: Container kube-scheduler ready: true, restart count 3 Jan 20 21:12:13.214: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 20 21:12:13.214: INFO: Container kube-apiserver ready: true, restart count 1 Jan 20 21:12:13.214: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 20 21:12:13.214: INFO: Container etcd ready: true, restart count 1 Jan 20 21:12:13.214: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 20 21:12:13.214: INFO: Container coredns ready: true, restart count 0 Jan 20 21:12:13.214: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 20 21:12:13.214: INFO: Container coredns ready: true, restart count 0 Jan 20 21:12:13.214: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 20 21:12:13.214: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 20 21:12:13.214: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 20 21:12:13.214: INFO: Container kube-proxy ready: true, restart count 0 Jan 20 21:12:13.214: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 20 21:12:13.214: INFO: Container weave ready: true, restart count 0 Jan 20 21:12:13.214: INFO: Container weave-npc ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-mvvl6gufaqub Jan 20 21:12:13.472: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 20 21:12:13.472: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 20 21:12:13.472: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Jan 20 21:12:13.472: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub Jan 20 21:12:13.472: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub Jan 20 21:12:13.472: INFO: Pod 
kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Jan 20 21:12:13.472: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node Jan 20 21:12:13.472: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 20 21:12:13.472: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node Jan 20 21:12:13.473: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub STEP: Starting Pods to consume most of the cluster CPU. Jan 20 21:12:13.473: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node Jan 20 21:12:13.482: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-097984a4-225c-4089-9e40-cf6ee3b480bb.15ebb4a94df76af1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-115/filler-pod-097984a4-225c-4089-9e40-cf6ee3b480bb to jerma-server-mvvl6gufaqub] STEP: Considering event: Type = [Normal], Name = [filler-pod-097984a4-225c-4089-9e40-cf6ee3b480bb.15ebb4aa333a761e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-097984a4-225c-4089-9e40-cf6ee3b480bb.15ebb4ab213019b8], Reason = [Created], Message = [Created container filler-pod-097984a4-225c-4089-9e40-cf6ee3b480bb] STEP: Considering event: Type = [Normal], Name = [filler-pod-097984a4-225c-4089-9e40-cf6ee3b480bb.15ebb4ab49003ab2], Reason = [Started], Message = [Started container filler-pod-097984a4-225c-4089-9e40-cf6ee3b480bb] STEP: Considering event: Type = [Normal], Name = [filler-pod-c44388c5-9f2d-49fb-84c3-88c1b9187427.15ebb4a94ca1e671], Reason = [Scheduled], Message = [Successfully assigned sched-pred-115/filler-pod-c44388c5-9f2d-49fb-84c3-88c1b9187427 to jerma-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-c44388c5-9f2d-49fb-84c3-88c1b9187427.15ebb4aa35c6d0a0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c44388c5-9f2d-49fb-84c3-88c1b9187427.15ebb4aafc512398], Reason = [Created], Message = [Created container filler-pod-c44388c5-9f2d-49fb-84c3-88c1b9187427] STEP: Considering event: Type = [Normal], Name = [filler-pod-c44388c5-9f2d-49fb-84c3-88c1b9187427.15ebb4ab1cb5ed03], Reason = [Started], Message = [Started container filler-pod-c44388c5-9f2d-49fb-84c3-88c1b9187427] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ebb4aba4e11ea9], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ebb4aba67b1a69], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node jerma-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-server-mvvl6gufaqub STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:12:24.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-115" for this suite. 
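The test above sizes each filler pod as the node's allocatable CPU minus the CPU already requested by pods on that node, so the final "additional" pod can only fail with Insufficient cpu. A rough sketch of that arithmetic with apimachinery's resource.Quantity; the allocatable figure is an assumption chosen so the result matches the 2786m filler request on jerma-node (only the 20m weave-net request and the filler size appear in the log):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	allocatable := resource.MustParse("2806m") // assumed node allocatable CPU
	requested := resource.MustParse("20m")     // weave-net-kz8lv's request on jerma-node

	// Filler pod request = allocatable - already-requested.
	filler := allocatable.DeepCopy()
	filler.Sub(requested)
	fmt.Printf("filler pod request: cpu=%dm\n", filler.MilliValue()) // cpu=2786m
}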
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:11.781 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":15,"skipped":272,"failed":0} [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:12:24.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Jan 20 21:12:25.041: INFO: Waiting up to 5m0s for pod "client-containers-e21fa945-2b04-4ad8-a3c2-6ef3c62e43eb" in namespace "containers-8716" to be "success or failure" Jan 20 21:12:25.085: INFO: Pod "client-containers-e21fa945-2b04-4ad8-a3c2-6ef3c62e43eb": Phase="Pending", Reason="", readiness=false. Elapsed: 43.545093ms Jan 20 21:12:27.094: INFO: Pod "client-containers-e21fa945-2b04-4ad8-a3c2-6ef3c62e43eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05229642s Jan 20 21:12:29.337: INFO: Pod "client-containers-e21fa945-2b04-4ad8-a3c2-6ef3c62e43eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2956267s Jan 20 21:12:31.614: INFO: Pod "client-containers-e21fa945-2b04-4ad8-a3c2-6ef3c62e43eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.572771259s Jan 20 21:12:34.137: INFO: Pod "client-containers-e21fa945-2b04-4ad8-a3c2-6ef3c62e43eb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.095716997s Jan 20 21:12:36.145: INFO: Pod "client-containers-e21fa945-2b04-4ad8-a3c2-6ef3c62e43eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.103478402s STEP: Saw pod success Jan 20 21:12:36.145: INFO: Pod "client-containers-e21fa945-2b04-4ad8-a3c2-6ef3c62e43eb" satisfied condition "success or failure" Jan 20 21:12:36.149: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod client-containers-e21fa945-2b04-4ad8-a3c2-6ef3c62e43eb container test-container: STEP: delete the pod Jan 20 21:12:36.229: INFO: Waiting for pod client-containers-e21fa945-2b04-4ad8-a3c2-6ef3c62e43eb to disappear Jan 20 21:12:36.308: INFO: Pod client-containers-e21fa945-2b04-4ad8-a3c2-6ef3c62e43eb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:12:36.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8716" for this suite. 
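Overriding an image's default arguments (its Docker CMD) means setting the container's args field while leaving command unset, so the image's entrypoint still runs but with the supplied arguments. A hedged sketch of such a pod spec in Go; the image and argument values here are illustrative, not the ones the test itself uses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "override-args-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",                          // illustrative image
				Args:  []string{"echo", "overridden CMD"}, // replaces the image's CMD
			}},
		},
	}
	fmt.Printf("%s will run with args %v\n", pod.Name, pod.Spec.Containers[0].Args)
}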
• [SLOW TEST:11.484 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":272,"failed":0} SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:12:36.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-6160/configmap-test-24043927-2fac-4b4b-9c57-00c90eca13ea STEP: Creating a pod to test consume configMaps Jan 20 21:12:37.631: INFO: Waiting up to 5m0s for pod "pod-configmaps-394e8b10-512f-4ff8-84e7-0cc14dafc59c" in namespace "configmap-6160" to be "success or failure" Jan 20 21:12:37.640: INFO: Pod "pod-configmaps-394e8b10-512f-4ff8-84e7-0cc14dafc59c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.329324ms Jan 20 21:12:39.652: INFO: Pod "pod-configmaps-394e8b10-512f-4ff8-84e7-0cc14dafc59c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020341418s Jan 20 21:12:41.657: INFO: Pod "pod-configmaps-394e8b10-512f-4ff8-84e7-0cc14dafc59c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02523843s Jan 20 21:12:43.668: INFO: Pod "pod-configmaps-394e8b10-512f-4ff8-84e7-0cc14dafc59c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036084207s Jan 20 21:12:45.674: INFO: Pod "pod-configmaps-394e8b10-512f-4ff8-84e7-0cc14dafc59c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042729005s STEP: Saw pod success Jan 20 21:12:45.674: INFO: Pod "pod-configmaps-394e8b10-512f-4ff8-84e7-0cc14dafc59c" satisfied condition "success or failure" Jan 20 21:12:45.679: INFO: Trying to get logs from node jerma-node pod pod-configmaps-394e8b10-512f-4ff8-84e7-0cc14dafc59c container env-test: STEP: delete the pod Jan 20 21:12:45.720: INFO: Waiting for pod pod-configmaps-394e8b10-512f-4ff8-84e7-0cc14dafc59c to disappear Jan 20 21:12:45.731: INFO: Pod pod-configmaps-394e8b10-512f-4ff8-84e7-0cc14dafc59c no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:12:45.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6160" for this suite. 
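Consuming a ConfigMap "via the environment" wires an env var's valueFrom to a ConfigMap key, which the kubelet resolves when the container starts. A small sketch of the relevant field; the key and variable names are assumptions, since the log records only the ConfigMap's name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := corev1.EnvVar{
		Name: "CONFIG_DATA_1", // hypothetical variable name
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-24043927-2fac-4b4b-9c57-00c90eca13ea",
				},
				Key: "data-1", // hypothetical key
			},
		},
	}
	fmt.Printf("env %s <- ConfigMap %q key %q\n",
		env.Name, env.ValueFrom.ConfigMapKeyRef.Name, env.ValueFrom.ConfigMapKeyRef.Key)
}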
• [SLOW TEST:9.424 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":279,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:12:45.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jan 20 21:12:45.850: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:13:03.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4267" for this suite.
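Marking a version "not served" flips the Served flag on one entry of the CRD's versions list; the apiserver then drops that version's definitions from /openapi/v2 while the other version's schema stays published. A minimal sketch of the relevant apiextensions.k8s.io/v1 fields (the version names are illustrative):

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	versions := []apiextensionsv1.CustomResourceDefinitionVersion{
		{Name: "v1", Served: true, Storage: true},
		{Name: "v2", Served: true, Storage: false},
	}
	versions[1].Served = false // mark a version not served
	for _, v := range versions {
		fmt.Printf("%s: served=%v storage=%v\n", v.Name, v.Served, v.Storage)
	}
}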
• [SLOW TEST:18.171 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":18,"skipped":295,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:13:03.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:13:04.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8281" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":19,"skipped":301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:13:04.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9a25282e-0ec7-4a50-8ddc-fb2b410c3924 STEP: Creating a pod to test consume secrets Jan 20 21:13:04.339: INFO: Waiting up to 5m0s for pod "pod-secrets-e9e74583-91a1-4382-8604-fc1a815c5e41" in namespace "secrets-9825" to be "success or failure" Jan 20 21:13:04.343: INFO: Pod "pod-secrets-e9e74583-91a1-4382-8604-fc1a815c5e41": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.866482ms Jan 20 21:13:06.351: INFO: Pod "pod-secrets-e9e74583-91a1-4382-8604-fc1a815c5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011963558s Jan 20 21:13:08.926: INFO: Pod "pod-secrets-e9e74583-91a1-4382-8604-fc1a815c5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.586102745s Jan 20 21:13:10.933: INFO: Pod "pod-secrets-e9e74583-91a1-4382-8604-fc1a815c5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 6.593708682s Jan 20 21:13:12.940: INFO: Pod "pod-secrets-e9e74583-91a1-4382-8604-fc1a815c5e41": Phase="Pending", Reason="", readiness=false. Elapsed: 8.601024051s Jan 20 21:13:14.945: INFO: Pod "pod-secrets-e9e74583-91a1-4382-8604-fc1a815c5e41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.605095869s STEP: Saw pod success Jan 20 21:13:14.945: INFO: Pod "pod-secrets-e9e74583-91a1-4382-8604-fc1a815c5e41" satisfied condition "success or failure" Jan 20 21:13:14.947: INFO: Trying to get logs from node jerma-node pod pod-secrets-e9e74583-91a1-4382-8604-fc1a815c5e41 container secret-volume-test: STEP: delete the pod Jan 20 21:13:15.135: INFO: Waiting for pod pod-secrets-e9e74583-91a1-4382-8604-fc1a815c5e41 to disappear Jan 20 21:13:15.152: INFO: Pod pod-secrets-e9e74583-91a1-4382-8604-fc1a815c5e41 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:13:15.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9825" for this suite. • [SLOW TEST:11.151 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:13:15.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 20 21:13:22.636: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 
21:13:22.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9397" for this suite. • [SLOW TEST:7.413 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":442,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:13:22.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-8e998c41-81b0-4f1c-bbad-fd8e27db1853 STEP: Creating a pod to test consume secrets Jan 20 21:13:23.170: INFO: Waiting up to 5m0s for pod "pod-secrets-13e6e8a3-efac-49f7-9a43-f117c7910f9c" in namespace "secrets-4941" to be "success or failure" Jan 20 21:13:23.181: INFO: Pod "pod-secrets-13e6e8a3-efac-49f7-9a43-f117c7910f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.490562ms Jan 20 21:13:25.191: INFO: Pod "pod-secrets-13e6e8a3-efac-49f7-9a43-f117c7910f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020207681s Jan 20 21:13:27.199: INFO: Pod "pod-secrets-13e6e8a3-efac-49f7-9a43-f117c7910f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028586624s Jan 20 21:13:29.206: INFO: Pod "pod-secrets-13e6e8a3-efac-49f7-9a43-f117c7910f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035322458s Jan 20 21:13:31.211: INFO: Pod "pod-secrets-13e6e8a3-efac-49f7-9a43-f117c7910f9c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.04013446s STEP: Saw pod success Jan 20 21:13:31.211: INFO: Pod "pod-secrets-13e6e8a3-efac-49f7-9a43-f117c7910f9c" satisfied condition "success or failure" Jan 20 21:13:31.213: INFO: Trying to get logs from node jerma-node pod pod-secrets-13e6e8a3-efac-49f7-9a43-f117c7910f9c container secret-volume-test: STEP: delete the pod Jan 20 21:13:31.332: INFO: Waiting for pod pod-secrets-13e6e8a3-efac-49f7-9a43-f117c7910f9c to disappear Jan 20 21:13:31.337: INFO: Pod pod-secrets-13e6e8a3-efac-49f7-9a43-f117c7910f9c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:13:31.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4941" for this suite. • [SLOW TEST:8.674 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:13:31.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6903 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 20 21:13:31.508: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 20 21:14:11.896: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-6903 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 21:14:11.896: INFO: >>> kubeConfig: /root/.kube/config I0120 21:14:11.951212 9 log.go:172] (0xc003727290) (0xc00297dd60) Create stream I0120 21:14:11.951363 9 log.go:172] (0xc003727290) (0xc00297dd60) Stream added, broadcasting: 1 I0120 21:14:11.988754 9 log.go:172] (0xc003727290) Reply frame received for 1 I0120 21:14:11.988869 9 log.go:172] (0xc003727290) (0xc0029fe000) Create stream I0120 21:14:11.988884 9 log.go:172] (0xc003727290) (0xc0029fe000) Stream added, broadcasting: 3 I0120 21:14:11.990888 9 log.go:172] (0xc003727290) Reply frame received for 3 I0120 21:14:11.990978 9 log.go:172] (0xc003727290) (0xc0023261e0) Create stream I0120 21:14:11.990997 9 log.go:172] (0xc003727290) (0xc0023261e0) Stream added, broadcasting: 
5 I0120 21:14:11.992825 9 log.go:172] (0xc003727290) Reply frame received for 5 I0120 21:14:12.122853 9 log.go:172] (0xc003727290) Data frame received for 3 I0120 21:14:12.123067 9 log.go:172] (0xc0029fe000) (3) Data frame handling I0120 21:14:12.123143 9 log.go:172] (0xc0029fe000) (3) Data frame sent I0120 21:14:12.238805 9 log.go:172] (0xc003727290) Data frame received for 1 I0120 21:14:12.238975 9 log.go:172] (0xc003727290) (0xc0029fe000) Stream removed, broadcasting: 3 I0120 21:14:12.239151 9 log.go:172] (0xc00297dd60) (1) Data frame handling I0120 21:14:12.239210 9 log.go:172] (0xc00297dd60) (1) Data frame sent I0120 21:14:12.239238 9 log.go:172] (0xc003727290) (0xc0023261e0) Stream removed, broadcasting: 5 I0120 21:14:12.239330 9 log.go:172] (0xc003727290) (0xc00297dd60) Stream removed, broadcasting: 1 I0120 21:14:12.239375 9 log.go:172] (0xc003727290) Go away received I0120 21:14:12.240413 9 log.go:172] (0xc003727290) (0xc00297dd60) Stream removed, broadcasting: 1 I0120 21:14:12.240430 9 log.go:172] (0xc003727290) (0xc0029fe000) Stream removed, broadcasting: 3 I0120 21:14:12.240447 9 log.go:172] (0xc003727290) (0xc0023261e0) Stream removed, broadcasting: 5 Jan 20 21:14:12.240: INFO: Waiting for responses: map[] Jan 20 21:14:12.245: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6903 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 21:14:12.245: INFO: >>> kubeConfig: /root/.kube/config I0120 21:14:12.296929 9 log.go:172] (0xc002bbe000) (0xc0024fc0a0) Create stream I0120 21:14:12.297057 9 log.go:172] (0xc002bbe000) (0xc0024fc0a0) Stream added, broadcasting: 1 I0120 21:14:12.300258 9 log.go:172] (0xc002bbe000) Reply frame received for 1 I0120 21:14:12.300294 9 log.go:172] (0xc002bbe000) (0xc002326320) Create stream I0120 21:14:12.300302 9 log.go:172] (0xc002bbe000) (0xc002326320) Stream added, broadcasting: 3 I0120 21:14:12.301620 9 log.go:172] (0xc002bbe000) Reply frame received for 3 I0120 21:14:12.301712 9 log.go:172] (0xc002bbe000) (0xc0023263c0) Create stream I0120 21:14:12.301735 9 log.go:172] (0xc002bbe000) (0xc0023263c0) Stream added, broadcasting: 5 I0120 21:14:12.303831 9 log.go:172] (0xc002bbe000) Reply frame received for 5 I0120 21:14:12.383444 9 log.go:172] (0xc002bbe000) Data frame received for 3 I0120 21:14:12.383502 9 log.go:172] (0xc002326320) (3) Data frame handling I0120 21:14:12.383520 9 log.go:172] (0xc002326320) (3) Data frame sent I0120 21:14:12.444272 9 log.go:172] (0xc002bbe000) Data frame received for 1 I0120 21:14:12.444399 9 log.go:172] (0xc002bbe000) (0xc0023263c0) Stream removed, broadcasting: 5 I0120 21:14:12.444474 9 log.go:172] (0xc0024fc0a0) (1) Data frame handling I0120 21:14:12.444508 9 log.go:172] (0xc0024fc0a0) (1) Data frame sent I0120 21:14:12.444549 9 log.go:172] (0xc002bbe000) (0xc002326320) Stream removed, broadcasting: 3 I0120 21:14:12.444609 9 log.go:172] (0xc002bbe000) (0xc0024fc0a0) Stream removed, broadcasting: 1 I0120 21:14:12.444665 9 log.go:172] (0xc002bbe000) Go away received I0120 21:14:12.444856 9 log.go:172] (0xc002bbe000) (0xc0024fc0a0) Stream removed, broadcasting: 1 I0120 21:14:12.444870 9 log.go:172] (0xc002bbe000) (0xc002326320) Stream removed, broadcasting: 3 I0120 21:14:12.444879 9 log.go:172] (0xc002bbe000) (0xc0023263c0) Stream removed, broadcasting: 5 Jan 20 21:14:12.445: INFO: Waiting for responses: map[] 
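Each connectivity check above shells into the host-test pod and curls agnhost's /dial endpoint, which asks the pod at 10.44.0.2 to contact a target pod IP and report back the hostname that answered. A short sketch that rebuilds one probe URL from the parameters shown in the log (query-parameter order may differ, since Encode sorts keys):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Set("request", "hostname") // what the target should report back
	q.Set("protocol", "http")
	q.Set("host", "10.44.0.1") // target pod IP from the log
	q.Set("port", "8080")
	q.Set("tries", "1")
	u := url.URL{
		Scheme:   "http",
		Host:     "10.44.0.2:8080", // the dialing (host-test) pod
		Path:     "/dial",
		RawQuery: q.Encode(),
	}
	fmt.Println(u.String())
}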
[AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:14:12.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6903" for this suite. • [SLOW TEST:41.109 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":486,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:14:12.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 21:14:12.584: INFO: Waiting up to 5m0s for pod "downwardapi-volume-820311f1-fa4b-4b16-8ebf-3c0731d56ca5" in namespace "downward-api-1110" to be "success or failure" Jan 20 21:14:12.604: INFO: Pod "downwardapi-volume-820311f1-fa4b-4b16-8ebf-3c0731d56ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.261589ms Jan 20 21:14:14.617: INFO: Pod "downwardapi-volume-820311f1-fa4b-4b16-8ebf-3c0731d56ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032171178s Jan 20 21:14:16.635: INFO: Pod "downwardapi-volume-820311f1-fa4b-4b16-8ebf-3c0731d56ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05057047s Jan 20 21:14:19.541: INFO: Pod "downwardapi-volume-820311f1-fa4b-4b16-8ebf-3c0731d56ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.955638613s Jan 20 21:14:21.550: INFO: Pod "downwardapi-volume-820311f1-fa4b-4b16-8ebf-3c0731d56ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.965037884s Jan 20 21:14:23.560: INFO: Pod "downwardapi-volume-820311f1-fa4b-4b16-8ebf-3c0731d56ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.974736116s Jan 20 21:14:25.569: INFO: Pod "downwardapi-volume-820311f1-fa4b-4b16-8ebf-3c0731d56ca5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.984630291s STEP: Saw pod success Jan 20 21:14:25.570: INFO: Pod "downwardapi-volume-820311f1-fa4b-4b16-8ebf-3c0731d56ca5" satisfied condition "success or failure" Jan 20 21:14:25.577: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-820311f1-fa4b-4b16-8ebf-3c0731d56ca5 container client-container: STEP: delete the pod Jan 20 21:14:25.621: INFO: Waiting for pod downwardapi-volume-820311f1-fa4b-4b16-8ebf-3c0731d56ca5 to disappear Jan 20 21:14:25.626: INFO: Pod downwardapi-volume-820311f1-fa4b-4b16-8ebf-3c0731d56ca5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:14:25.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1110" for this suite. • [SLOW TEST:13.180 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":491,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:14:25.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3165.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3165.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3165.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3165.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3165.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3165.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 20 21:14:38.020: INFO: DNS probes using dns-3165/dns-test-8598f195-ef35-44b8-8dbc-61ff3ec6c1e1 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:14:38.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3165" for this suite. • [SLOW TEST:12.783 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":25,"skipped":512,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:14:38.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 20 21:14:54.653: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 20 21:14:54.678: INFO: Pod pod-with-poststart-exec-hook still exists Jan 20 21:14:56.679: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 20 21:14:56.686: INFO: Pod pod-with-poststart-exec-hook still exists Jan 20 21:14:58.679: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 20 21:14:58.695: INFO: Pod pod-with-poststart-exec-hook still exists Jan 20 21:15:00.678: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 20 21:15:00.690: INFO: Pod pod-with-poststart-exec-hook still exists Jan 20 21:15:02.679: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 20 21:15:02.687: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:15:02.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9382" for this suite. • [SLOW TEST:24.277 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":517,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:15:02.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-26cb7268-b3a2-4f4b-9cb7-084f008d6e82 STEP: Creating a pod to test consume secrets Jan 20 21:15:02.965: INFO: Waiting up to 5m0s for pod "pod-secrets-702be061-236f-43ba-828d-0604c095f601" in namespace "secrets-8942" to be "success or failure" Jan 20 21:15:02.988: INFO: Pod "pod-secrets-702be061-236f-43ba-828d-0604c095f601": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.10845ms Jan 20 21:15:04.997: INFO: Pod "pod-secrets-702be061-236f-43ba-828d-0604c095f601": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031788202s Jan 20 21:15:07.004: INFO: Pod "pod-secrets-702be061-236f-43ba-828d-0604c095f601": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038560891s Jan 20 21:15:09.009: INFO: Pod "pod-secrets-702be061-236f-43ba-828d-0604c095f601": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044050864s Jan 20 21:15:11.021: INFO: Pod "pod-secrets-702be061-236f-43ba-828d-0604c095f601": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055588997s STEP: Saw pod success Jan 20 21:15:11.021: INFO: Pod "pod-secrets-702be061-236f-43ba-828d-0604c095f601" satisfied condition "success or failure" Jan 20 21:15:11.076: INFO: Trying to get logs from node jerma-node pod pod-secrets-702be061-236f-43ba-828d-0604c095f601 container secret-volume-test: STEP: delete the pod Jan 20 21:15:11.132: INFO: Waiting for pod pod-secrets-702be061-236f-43ba-828d-0604c095f601 to disappear Jan 20 21:15:11.143: INFO: Pod pod-secrets-702be061-236f-43ba-828d-0604c095f601 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:15:11.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8942" for this suite. STEP: Destroying namespace "secret-namespace-673" for this suite. • [SLOW TEST:8.464 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":544,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:15:11.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-08c517b3-557a-4c05-85f7-1f14a3559122 STEP: Creating a pod to test consume secrets Jan 20 21:15:11.296: INFO: Waiting up to 5m0s for pod "pod-secrets-032df8e0-f70f-4a2a-b252-ebec003c88f9" in namespace "secrets-2918" to be "success or failure" Jan 20 21:15:11.311: INFO: Pod "pod-secrets-032df8e0-f70f-4a2a-b252-ebec003c88f9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.682375ms Jan 20 21:15:13.320: INFO: Pod "pod-secrets-032df8e0-f70f-4a2a-b252-ebec003c88f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023639112s Jan 20 21:15:15.328: INFO: Pod "pod-secrets-032df8e0-f70f-4a2a-b252-ebec003c88f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032116184s Jan 20 21:15:17.339: INFO: Pod "pod-secrets-032df8e0-f70f-4a2a-b252-ebec003c88f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043166382s Jan 20 21:15:19.348: INFO: Pod "pod-secrets-032df8e0-f70f-4a2a-b252-ebec003c88f9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05230213s Jan 20 21:15:21.357: INFO: Pod "pod-secrets-032df8e0-f70f-4a2a-b252-ebec003c88f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060870816s STEP: Saw pod success Jan 20 21:15:21.357: INFO: Pod "pod-secrets-032df8e0-f70f-4a2a-b252-ebec003c88f9" satisfied condition "success or failure" Jan 20 21:15:21.362: INFO: Trying to get logs from node jerma-node pod pod-secrets-032df8e0-f70f-4a2a-b252-ebec003c88f9 container secret-volume-test: STEP: delete the pod Jan 20 21:15:21.483: INFO: Waiting for pod pod-secrets-032df8e0-f70f-4a2a-b252-ebec003c88f9 to disappear Jan 20 21:15:21.488: INFO: Pod pod-secrets-032df8e0-f70f-4a2a-b252-ebec003c88f9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:15:21.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2918" for this suite. • [SLOW TEST:10.340 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":546,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:15:21.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:15:21.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4037" for this suite. 
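For reference, the coordination.k8s.io/v1 lease API exercised by this spec can be driven directly with kubectl. A minimal sketch, assuming a reachable cluster; the names example-lease and example-holder are hypothetical:

kubectl apply -n default -f - <<EOF
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease
spec:
  holderIdentity: example-holder   # current holder of the lease (illustrative)
  leaseDurationSeconds: 30         # how long the holder claims the lease
EOF
kubectl get lease example-lease -n default -o yaml   # read the spec back
kubectl delete lease example-lease -n default        # clean up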
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":29,"skipped":573,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:15:21.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6810 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6810 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6810 Jan 20 21:15:21.905: INFO: Found 0 stateful pods, waiting for 1 Jan 20 21:15:31.915: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 20 21:15:31.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6810 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 21:15:32.431: INFO: stderr: "I0120 21:15:32.214611 456 log.go:172] (0xc0009ba000) (0xc0006e26e0) Create stream\nI0120 21:15:32.215292 456 log.go:172] (0xc0009ba000) (0xc0006e26e0) Stream added, broadcasting: 1\nI0120 21:15:32.225137 456 log.go:172] (0xc0009ba000) Reply frame received for 1\nI0120 21:15:32.225372 456 log.go:172] (0xc0009ba000) (0xc00036d4a0) Create stream\nI0120 21:15:32.225422 456 log.go:172] (0xc0009ba000) (0xc00036d4a0) Stream added, broadcasting: 3\nI0120 21:15:32.250025 456 log.go:172] (0xc0009ba000) Reply frame received for 3\nI0120 21:15:32.250166 456 log.go:172] (0xc0009ba000) (0xc00036d540) Create stream\nI0120 21:15:32.250191 456 log.go:172] (0xc0009ba000) (0xc00036d540) Stream added, broadcasting: 5\nI0120 21:15:32.254418 456 log.go:172] (0xc0009ba000) Reply frame received for 5\nI0120 21:15:32.332849 456 log.go:172] (0xc0009ba000) Data frame received for 5\nI0120 21:15:32.332912 456 log.go:172] (0xc00036d540) (5) Data frame handling\nI0120 21:15:32.332927 456 log.go:172] (0xc00036d540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 21:15:32.360979 456 log.go:172] (0xc0009ba000) Data frame received for 3\nI0120 21:15:32.361049 456 log.go:172] (0xc00036d4a0) (3) Data frame handling\nI0120 21:15:32.361095 456 log.go:172] (0xc00036d4a0) (3) Data frame sent\nI0120 21:15:32.420795 456 log.go:172] (0xc0009ba000) Data frame received for 1\nI0120 21:15:32.420944 456 log.go:172] (0xc0009ba000) (0xc00036d4a0) Stream 
removed, broadcasting: 3\nI0120 21:15:32.421032 456 log.go:172] (0xc0006e26e0) (1) Data frame handling\nI0120 21:15:32.421071 456 log.go:172] (0xc0006e26e0) (1) Data frame sent\nI0120 21:15:32.421110 456 log.go:172] (0xc0009ba000) (0xc00036d540) Stream removed, broadcasting: 5\nI0120 21:15:32.421148 456 log.go:172] (0xc0009ba000) (0xc0006e26e0) Stream removed, broadcasting: 1\nI0120 21:15:32.421182 456 log.go:172] (0xc0009ba000) Go away received\nI0120 21:15:32.422005 456 log.go:172] (0xc0009ba000) (0xc0006e26e0) Stream removed, broadcasting: 1\nI0120 21:15:32.422045 456 log.go:172] (0xc0009ba000) (0xc00036d4a0) Stream removed, broadcasting: 3\nI0120 21:15:32.422061 456 log.go:172] (0xc0009ba000) (0xc00036d540) Stream removed, broadcasting: 5\n" Jan 20 21:15:32.431: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 21:15:32.431: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 21:15:32.437: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 20 21:15:42.448: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 20 21:15:42.448: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 21:15:42.476: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999607s Jan 20 21:15:43.498: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.984828559s Jan 20 21:15:44.512: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.962342526s Jan 20 21:15:45.525: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.948150642s Jan 20 21:15:46.539: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.935507051s Jan 20 21:15:47.549: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.921715902s Jan 20 21:15:48.561: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.911357195s Jan 20 21:15:49.571: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.899911077s Jan 20 21:15:50.587: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.889768007s Jan 20 21:15:51.599: INFO: Verifying statefulset ss doesn't scale past 1 for another 873.787967ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6810 Jan 20 21:15:52.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6810 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 21:15:52.983: INFO: stderr: "I0120 21:15:52.808377 479 log.go:172] (0xc0004f1340) (0xc000846820) Create stream\nI0120 21:15:52.808781 479 log.go:172] (0xc0004f1340) (0xc000846820) Stream added, broadcasting: 1\nI0120 21:15:52.815231 479 log.go:172] (0xc0004f1340) Reply frame received for 1\nI0120 21:15:52.815388 479 log.go:172] (0xc0004f1340) (0xc0001fb360) Create stream\nI0120 21:15:52.815704 479 log.go:172] (0xc0004f1340) (0xc0001fb360) Stream added, broadcasting: 3\nI0120 21:15:52.817341 479 log.go:172] (0xc0004f1340) Reply frame received for 3\nI0120 21:15:52.817364 479 log.go:172] (0xc0004f1340) (0xc0001fb400) Create stream\nI0120 21:15:52.817378 479 log.go:172] (0xc0004f1340) (0xc0001fb400) Stream added, broadcasting: 5\nI0120 21:15:52.818363 479 log.go:172] (0xc0004f1340) Reply frame received for 5\nI0120 21:15:52.888406 479 log.go:172] (0xc0004f1340) Data frame received for 5\nI0120 
21:15:52.888481 479 log.go:172] (0xc0001fb400) (5) Data frame handling\nI0120 21:15:52.888508 479 log.go:172] (0xc0001fb400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0120 21:15:52.888659 479 log.go:172] (0xc0004f1340) Data frame received for 3\nI0120 21:15:52.888672 479 log.go:172] (0xc0001fb360) (3) Data frame handling\nI0120 21:15:52.888686 479 log.go:172] (0xc0001fb360) (3) Data frame sent\nI0120 21:15:52.965602 479 log.go:172] (0xc0004f1340) (0xc0001fb360) Stream removed, broadcasting: 3\nI0120 21:15:52.965921 479 log.go:172] (0xc0004f1340) Data frame received for 1\nI0120 21:15:52.965976 479 log.go:172] (0xc000846820) (1) Data frame handling\nI0120 21:15:52.966027 479 log.go:172] (0xc000846820) (1) Data frame sent\nI0120 21:15:52.966168 479 log.go:172] (0xc0004f1340) (0xc000846820) Stream removed, broadcasting: 1\nI0120 21:15:52.966240 479 log.go:172] (0xc0004f1340) (0xc0001fb400) Stream removed, broadcasting: 5\nI0120 21:15:52.966262 479 log.go:172] (0xc0004f1340) Go away received\nI0120 21:15:52.967573 479 log.go:172] (0xc0004f1340) (0xc000846820) Stream removed, broadcasting: 1\nI0120 21:15:52.967592 479 log.go:172] (0xc0004f1340) (0xc0001fb360) Stream removed, broadcasting: 3\nI0120 21:15:52.967600 479 log.go:172] (0xc0004f1340) (0xc0001fb400) Stream removed, broadcasting: 5\n" Jan 20 21:15:52.983: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 20 21:15:52.983: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 20 21:15:52.991: INFO: Found 1 stateful pods, waiting for 3 Jan 20 21:16:03.001: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:16:03.001: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:16:03.001: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 20 21:16:13.002: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:16:13.002: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:16:13.002: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 20 21:16:13.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6810 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 21:16:13.399: INFO: stderr: "I0120 21:16:13.232051 494 log.go:172] (0xc000b998c0) (0xc000b8e820) Create stream\nI0120 21:16:13.232423 494 log.go:172] (0xc000b998c0) (0xc000b8e820) Stream added, broadcasting: 1\nI0120 21:16:13.240675 494 log.go:172] (0xc000b998c0) Reply frame received for 1\nI0120 21:16:13.240855 494 log.go:172] (0xc000b998c0) (0xc000ab2000) Create stream\nI0120 21:16:13.240890 494 log.go:172] (0xc000b998c0) (0xc000ab2000) Stream added, broadcasting: 3\nI0120 21:16:13.243131 494 log.go:172] (0xc000b998c0) Reply frame received for 3\nI0120 21:16:13.243166 494 log.go:172] (0xc000b998c0) (0xc000afa000) Create stream\nI0120 21:16:13.243178 494 log.go:172] (0xc000b998c0) (0xc000afa000) Stream added, broadcasting: 5\nI0120 21:16:13.245574 494 log.go:172] (0xc000b998c0) Reply frame received for 5\nI0120 21:16:13.321868 494 log.go:172] (0xc000b998c0) Data frame received for 
3\nI0120 21:16:13.321946 494 log.go:172] (0xc000ab2000) (3) Data frame handling\nI0120 21:16:13.321973 494 log.go:172] (0xc000ab2000) (3) Data frame sent\nI0120 21:16:13.322032 494 log.go:172] (0xc000b998c0) Data frame received for 5\nI0120 21:16:13.322067 494 log.go:172] (0xc000afa000) (5) Data frame handling\nI0120 21:16:13.322099 494 log.go:172] (0xc000afa000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 21:16:13.389162 494 log.go:172] (0xc000b998c0) Data frame received for 1\nI0120 21:16:13.389212 494 log.go:172] (0xc000b998c0) (0xc000afa000) Stream removed, broadcasting: 5\nI0120 21:16:13.389329 494 log.go:172] (0xc000b8e820) (1) Data frame handling\nI0120 21:16:13.389468 494 log.go:172] (0xc000b998c0) (0xc000ab2000) Stream removed, broadcasting: 3\nI0120 21:16:13.389538 494 log.go:172] (0xc000b8e820) (1) Data frame sent\nI0120 21:16:13.389559 494 log.go:172] (0xc000b998c0) (0xc000b8e820) Stream removed, broadcasting: 1\nI0120 21:16:13.389587 494 log.go:172] (0xc000b998c0) Go away received\nI0120 21:16:13.390774 494 log.go:172] (0xc000b998c0) (0xc000b8e820) Stream removed, broadcasting: 1\nI0120 21:16:13.390878 494 log.go:172] (0xc000b998c0) (0xc000ab2000) Stream removed, broadcasting: 3\nI0120 21:16:13.391049 494 log.go:172] (0xc000b998c0) (0xc000afa000) Stream removed, broadcasting: 5\n" Jan 20 21:16:13.399: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 21:16:13.399: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 21:16:13.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6810 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 21:16:13.939: INFO: stderr: "I0120 21:16:13.660338 514 log.go:172] (0xc000a35290) (0xc0009f0500) Create stream\nI0120 21:16:13.660814 514 log.go:172] (0xc000a35290) (0xc0009f0500) Stream added, broadcasting: 1\nI0120 21:16:13.676623 514 log.go:172] (0xc000a35290) Reply frame received for 1\nI0120 21:16:13.676730 514 log.go:172] (0xc000a35290) (0xc00093e000) Create stream\nI0120 21:16:13.676749 514 log.go:172] (0xc000a35290) (0xc00093e000) Stream added, broadcasting: 3\nI0120 21:16:13.677959 514 log.go:172] (0xc000a35290) Reply frame received for 3\nI0120 21:16:13.678008 514 log.go:172] (0xc000a35290) (0xc00068e5a0) Create stream\nI0120 21:16:13.678020 514 log.go:172] (0xc000a35290) (0xc00068e5a0) Stream added, broadcasting: 5\nI0120 21:16:13.678974 514 log.go:172] (0xc000a35290) Reply frame received for 5\nI0120 21:16:13.744850 514 log.go:172] (0xc000a35290) Data frame received for 5\nI0120 21:16:13.745025 514 log.go:172] (0xc00068e5a0) (5) Data frame handling\nI0120 21:16:13.745102 514 log.go:172] (0xc00068e5a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 21:16:13.782089 514 log.go:172] (0xc000a35290) Data frame received for 3\nI0120 21:16:13.782160 514 log.go:172] (0xc00093e000) (3) Data frame handling\nI0120 21:16:13.782198 514 log.go:172] (0xc00093e000) (3) Data frame sent\nI0120 21:16:13.913652 514 log.go:172] (0xc000a35290) (0xc00068e5a0) Stream removed, broadcasting: 5\nI0120 21:16:13.914346 514 log.go:172] (0xc000a35290) Data frame received for 1\nI0120 21:16:13.914728 514 log.go:172] (0xc000a35290) (0xc00093e000) Stream removed, broadcasting: 3\nI0120 21:16:13.915150 514 log.go:172] (0xc0009f0500) (1) Data frame handling\nI0120 21:16:13.915203 514 
log.go:172] (0xc0009f0500) (1) Data frame sent\nI0120 21:16:13.915264 514 log.go:172] (0xc000a35290) (0xc0009f0500) Stream removed, broadcasting: 1\nI0120 21:16:13.915355 514 log.go:172] (0xc000a35290) Go away received\nI0120 21:16:13.917782 514 log.go:172] (0xc000a35290) (0xc0009f0500) Stream removed, broadcasting: 1\nI0120 21:16:13.917808 514 log.go:172] (0xc000a35290) (0xc00093e000) Stream removed, broadcasting: 3\nI0120 21:16:13.917822 514 log.go:172] (0xc000a35290) (0xc00068e5a0) Stream removed, broadcasting: 5\n" Jan 20 21:16:13.940: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 21:16:13.940: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 21:16:13.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6810 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 21:16:14.387: INFO: stderr: "I0120 21:16:14.178148 536 log.go:172] (0xc0003c3ef0) (0xc000a26f00) Create stream\nI0120 21:16:14.178377 536 log.go:172] (0xc0003c3ef0) (0xc000a26f00) Stream added, broadcasting: 1\nI0120 21:16:14.182670 536 log.go:172] (0xc0003c3ef0) Reply frame received for 1\nI0120 21:16:14.182722 536 log.go:172] (0xc0003c3ef0) (0xc000b9c320) Create stream\nI0120 21:16:14.182746 536 log.go:172] (0xc0003c3ef0) (0xc000b9c320) Stream added, broadcasting: 3\nI0120 21:16:14.183866 536 log.go:172] (0xc0003c3ef0) Reply frame received for 3\nI0120 21:16:14.183897 536 log.go:172] (0xc0003c3ef0) (0xc000838000) Create stream\nI0120 21:16:14.183912 536 log.go:172] (0xc0003c3ef0) (0xc000838000) Stream added, broadcasting: 5\nI0120 21:16:14.184959 536 log.go:172] (0xc0003c3ef0) Reply frame received for 5\nI0120 21:16:14.241577 536 log.go:172] (0xc0003c3ef0) Data frame received for 5\nI0120 21:16:14.241694 536 log.go:172] (0xc000838000) (5) Data frame handling\nI0120 21:16:14.241742 536 log.go:172] (0xc000838000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 21:16:14.284150 536 log.go:172] (0xc0003c3ef0) Data frame received for 3\nI0120 21:16:14.284251 536 log.go:172] (0xc000b9c320) (3) Data frame handling\nI0120 21:16:14.284283 536 log.go:172] (0xc000b9c320) (3) Data frame sent\nI0120 21:16:14.377948 536 log.go:172] (0xc0003c3ef0) Data frame received for 1\nI0120 21:16:14.378190 536 log.go:172] (0xc0003c3ef0) (0xc000b9c320) Stream removed, broadcasting: 3\nI0120 21:16:14.378396 536 log.go:172] (0xc0003c3ef0) (0xc000838000) Stream removed, broadcasting: 5\nI0120 21:16:14.378450 536 log.go:172] (0xc000a26f00) (1) Data frame handling\nI0120 21:16:14.378479 536 log.go:172] (0xc000a26f00) (1) Data frame sent\nI0120 21:16:14.378490 536 log.go:172] (0xc0003c3ef0) (0xc000a26f00) Stream removed, broadcasting: 1\nI0120 21:16:14.378521 536 log.go:172] (0xc0003c3ef0) Go away received\nI0120 21:16:14.379522 536 log.go:172] (0xc0003c3ef0) (0xc000a26f00) Stream removed, broadcasting: 1\nI0120 21:16:14.379533 536 log.go:172] (0xc0003c3ef0) (0xc000b9c320) Stream removed, broadcasting: 3\nI0120 21:16:14.379537 536 log.go:172] (0xc0003c3ef0) (0xc000838000) Stream removed, broadcasting: 5\n" Jan 20 21:16:14.387: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 21:16:14.387: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 21:16:14.387: INFO: Waiting for 
statefulset status.replicas updated to 0 Jan 20 21:16:14.391: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 20 21:16:24.439: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 20 21:16:24.439: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 20 21:16:24.439: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 20 21:16:24.471: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999671s Jan 20 21:16:25.479: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.978277352s Jan 20 21:16:26.503: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970165489s Jan 20 21:16:27.547: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.946170608s Jan 20 21:16:28.562: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.901730711s Jan 20 21:16:29.570: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.88724929s Jan 20 21:16:30.807: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.879208753s Jan 20 21:16:31.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.642042766s Jan 20 21:16:32.824: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.633436123s Jan 20 21:16:33.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 624.7421ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6810 Jan 20 21:16:34.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6810 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 21:16:35.263: INFO: stderr: "I0120 21:16:35.084221     556 log.go:172] (0xc000a65080) (0xc000a60500) Create stream\nI0120 21:16:35.084457     556 log.go:172] (0xc000a65080) (0xc000a60500) Stream added, broadcasting: 1\nI0120 21:16:35.090402     556 log.go:172] (0xc000a65080) Reply frame received for 1\nI0120 21:16:35.090478     556 log.go:172] (0xc000a65080) (0xc000a341e0) Create stream\nI0120 21:16:35.090506     556 log.go:172] (0xc000a65080) (0xc000a341e0) Stream added, broadcasting: 3\nI0120 21:16:35.093602     556 log.go:172] (0xc000a65080) Reply frame received for 3\nI0120 21:16:35.093635     556 log.go:172] (0xc000a65080) (0xc000a52280) Create stream\nI0120 21:16:35.093649     556 log.go:172] (0xc000a65080) (0xc000a52280) Stream added, broadcasting: 5\nI0120 21:16:35.095303     556 log.go:172] (0xc000a65080) Reply frame received for 5\nI0120 21:16:35.167307     556 log.go:172] (0xc000a65080) Data frame received for 5\nI0120 21:16:35.167469     556 log.go:172] (0xc000a52280) (5) Data frame handling\nI0120 21:16:35.167505     556 log.go:172] (0xc000a65080) Data frame received for 3\nI0120 21:16:35.167527     556 log.go:172] (0xc000a341e0) (3) Data frame handling\nI0120 21:16:35.167539     556 log.go:172] (0xc000a341e0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0120 21:16:35.167591     556 log.go:172] (0xc000a52280) (5) Data frame sent\nI0120 21:16:35.246019     556 log.go:172] (0xc000a65080) (0xc000a52280) Stream removed, broadcasting: 5\nI0120 21:16:35.246306     556 log.go:172] (0xc000a65080) Data frame received for 1\nI0120 21:16:35.246526     556 log.go:172] (0xc000a65080) (0xc000a341e0) Stream removed, broadcasting: 3\nI0120 21:16:35.246596     556 log.go:172] (0xc000a60500) (1) Data frame handling\nI0120 21:16:35.246632     556 log.go:172] (0xc000a60500) (1) Data frame 
sent\nI0120 21:16:35.246648 556 log.go:172] (0xc000a65080) (0xc000a60500) Stream removed, broadcasting: 1\nI0120 21:16:35.246666 556 log.go:172] (0xc000a65080) Go away received\nI0120 21:16:35.248517 556 log.go:172] (0xc000a65080) (0xc000a60500) Stream removed, broadcasting: 1\nI0120 21:16:35.248528 556 log.go:172] (0xc000a65080) (0xc000a341e0) Stream removed, broadcasting: 3\nI0120 21:16:35.248532 556 log.go:172] (0xc000a65080) (0xc000a52280) Stream removed, broadcasting: 5\n" Jan 20 21:16:35.263: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 20 21:16:35.263: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 20 21:16:35.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6810 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 21:16:35.667: INFO: stderr: "I0120 21:16:35.494143 575 log.go:172] (0xc000bc66e0) (0xc0006e3e00) Create stream\nI0120 21:16:35.494306 575 log.go:172] (0xc000bc66e0) (0xc0006e3e00) Stream added, broadcasting: 1\nI0120 21:16:35.497765 575 log.go:172] (0xc000bc66e0) Reply frame received for 1\nI0120 21:16:35.497841 575 log.go:172] (0xc000bc66e0) (0xc0005fc6e0) Create stream\nI0120 21:16:35.497850 575 log.go:172] (0xc000bc66e0) (0xc0005fc6e0) Stream added, broadcasting: 3\nI0120 21:16:35.499285 575 log.go:172] (0xc000bc66e0) Reply frame received for 3\nI0120 21:16:35.499347 575 log.go:172] (0xc000bc66e0) (0xc0009e0000) Create stream\nI0120 21:16:35.499364 575 log.go:172] (0xc000bc66e0) (0xc0009e0000) Stream added, broadcasting: 5\nI0120 21:16:35.500441 575 log.go:172] (0xc000bc66e0) Reply frame received for 5\nI0120 21:16:35.561423 575 log.go:172] (0xc000bc66e0) Data frame received for 3\nI0120 21:16:35.561572 575 log.go:172] (0xc0005fc6e0) (3) Data frame handling\nI0120 21:16:35.561607 575 log.go:172] (0xc0005fc6e0) (3) Data frame sent\nI0120 21:16:35.561657 575 log.go:172] (0xc000bc66e0) Data frame received for 5\nI0120 21:16:35.561663 575 log.go:172] (0xc0009e0000) (5) Data frame handling\nI0120 21:16:35.561706 575 log.go:172] (0xc0009e0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0120 21:16:35.656375 575 log.go:172] (0xc000bc66e0) Data frame received for 1\nI0120 21:16:35.656549 575 log.go:172] (0xc000bc66e0) (0xc0005fc6e0) Stream removed, broadcasting: 3\nI0120 21:16:35.656714 575 log.go:172] (0xc0006e3e00) (1) Data frame handling\nI0120 21:16:35.656785 575 log.go:172] (0xc0006e3e00) (1) Data frame sent\nI0120 21:16:35.656854 575 log.go:172] (0xc000bc66e0) (0xc0009e0000) Stream removed, broadcasting: 5\nI0120 21:16:35.656903 575 log.go:172] (0xc000bc66e0) (0xc0006e3e00) Stream removed, broadcasting: 1\nI0120 21:16:35.656944 575 log.go:172] (0xc000bc66e0) Go away received\nI0120 21:16:35.658566 575 log.go:172] (0xc000bc66e0) (0xc0006e3e00) Stream removed, broadcasting: 1\nI0120 21:16:35.658595 575 log.go:172] (0xc000bc66e0) (0xc0005fc6e0) Stream removed, broadcasting: 3\nI0120 21:16:35.658610 575 log.go:172] (0xc000bc66e0) (0xc0009e0000) Stream removed, broadcasting: 5\n" Jan 20 21:16:35.667: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 20 21:16:35.667: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 20 21:16:35.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
exec --namespace=statefulset-6810 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 21:16:36.118: INFO: stderr: "I0120 21:16:35.838633 597 log.go:172] (0xc000a72000) (0xc0007c0000) Create stream\nI0120 21:16:35.838958 597 log.go:172] (0xc000a72000) (0xc0007c0000) Stream added, broadcasting: 1\nI0120 21:16:35.841442 597 log.go:172] (0xc000a72000) Reply frame received for 1\nI0120 21:16:35.841505 597 log.go:172] (0xc000a72000) (0xc0007c00a0) Create stream\nI0120 21:16:35.841514 597 log.go:172] (0xc000a72000) (0xc0007c00a0) Stream added, broadcasting: 3\nI0120 21:16:35.843654 597 log.go:172] (0xc000a72000) Reply frame received for 3\nI0120 21:16:35.843779 597 log.go:172] (0xc000a72000) (0xc0007c0140) Create stream\nI0120 21:16:35.843793 597 log.go:172] (0xc000a72000) (0xc0007c0140) Stream added, broadcasting: 5\nI0120 21:16:35.847715 597 log.go:172] (0xc000a72000) Reply frame received for 5\nI0120 21:16:35.957734 597 log.go:172] (0xc000a72000) Data frame received for 5\nI0120 21:16:35.957881 597 log.go:172] (0xc0007c0140) (5) Data frame handling\nI0120 21:16:35.957911 597 log.go:172] (0xc0007c0140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0120 21:16:35.957994 597 log.go:172] (0xc000a72000) Data frame received for 3\nI0120 21:16:35.958016 597 log.go:172] (0xc0007c00a0) (3) Data frame handling\nI0120 21:16:35.958035 597 log.go:172] (0xc0007c00a0) (3) Data frame sent\nI0120 21:16:36.097769 597 log.go:172] (0xc000a72000) (0xc0007c00a0) Stream removed, broadcasting: 3\nI0120 21:16:36.098176 597 log.go:172] (0xc000a72000) Data frame received for 1\nI0120 21:16:36.098189 597 log.go:172] (0xc0007c0000) (1) Data frame handling\nI0120 21:16:36.098211 597 log.go:172] (0xc0007c0000) (1) Data frame sent\nI0120 21:16:36.098218 597 log.go:172] (0xc000a72000) (0xc0007c0000) Stream removed, broadcasting: 1\nI0120 21:16:36.099339 597 log.go:172] (0xc000a72000) (0xc0007c0140) Stream removed, broadcasting: 5\nI0120 21:16:36.099943 597 log.go:172] (0xc000a72000) Go away received\nI0120 21:16:36.100754 597 log.go:172] (0xc000a72000) (0xc0007c0000) Stream removed, broadcasting: 1\nI0120 21:16:36.100918 597 log.go:172] (0xc000a72000) (0xc0007c00a0) Stream removed, broadcasting: 3\nI0120 21:16:36.101035 597 log.go:172] (0xc000a72000) (0xc0007c0140) Stream removed, broadcasting: 5\n" Jan 20 21:16:36.118: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 20 21:16:36.118: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 20 21:16:36.118: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 20 21:17:06.213: INFO: Deleting all statefulset in ns statefulset-6810 Jan 20 21:17:06.218: INFO: Scaling statefulset ss to 0 Jan 20 21:17:06.228: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 21:17:06.230: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:17:06.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6810" for this suite. 
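The ordering verified above (scale-up ordinal by ordinal, scale-down in reverse, both halted while any pod is not Ready) is the behaviour of the StatefulSet default podManagementPolicy: OrderedReady. A minimal sketch of observing it by hand, assuming an existing StatefulSet named ss in a hypothetical namespace statefulset-demo:

# Pods come up as ss-0, ss-1, ss-2 in sequence; each waits for the
# previous ordinal to be Running and Ready before it is created.
kubectl scale statefulset ss --replicas=3 -n statefulset-demo
kubectl get pods -n statefulset-demo -w
# Scale-down removes the highest ordinal first: ss-2, then ss-1, then ss-0.
kubectl scale statefulset ss --replicas=0 -n statefulset-demo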
• [SLOW TEST:104.551 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":30,"skipped":579,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:17:06.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-01a87d8a-0264-4993-8bbc-8407b40e79d1 STEP: Creating a pod to test consume secrets Jan 20 21:17:06.393: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f042da2-96a9-4cda-8d60-dbf22927948d" in namespace "projected-1191" to be "success or failure" Jan 20 21:17:06.400: INFO: Pod "pod-projected-secrets-9f042da2-96a9-4cda-8d60-dbf22927948d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.719494ms Jan 20 21:17:08.410: INFO: Pod "pod-projected-secrets-9f042da2-96a9-4cda-8d60-dbf22927948d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016682741s Jan 20 21:17:10.419: INFO: Pod "pod-projected-secrets-9f042da2-96a9-4cda-8d60-dbf22927948d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025784152s Jan 20 21:17:12.426: INFO: Pod "pod-projected-secrets-9f042da2-96a9-4cda-8d60-dbf22927948d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032768721s Jan 20 21:17:14.435: INFO: Pod "pod-projected-secrets-9f042da2-96a9-4cda-8d60-dbf22927948d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.041466821s STEP: Saw pod success Jan 20 21:17:14.435: INFO: Pod "pod-projected-secrets-9f042da2-96a9-4cda-8d60-dbf22927948d" satisfied condition "success or failure" Jan 20 21:17:14.440: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-9f042da2-96a9-4cda-8d60-dbf22927948d container projected-secret-volume-test: STEP: delete the pod Jan 20 21:17:14.514: INFO: Waiting for pod pod-projected-secrets-9f042da2-96a9-4cda-8d60-dbf22927948d to disappear Jan 20 21:17:14.552: INFO: Pod pod-projected-secrets-9f042da2-96a9-4cda-8d60-dbf22927948d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:17:14.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1191" for this suite. • [SLOW TEST:8.298 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":583,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:17:14.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Jan 20 21:17:24.815: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4250 PodName:pod-sharedvolume-ec0525dc-512c-4b50-8f7b-c2b92bf302f2 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 21:17:24.815: INFO: >>> kubeConfig: /root/.kube/config I0120 21:17:24.881081       9 log.go:172] (0xc003726370) (0xc0024fd5e0) Create stream I0120 21:17:24.881184       9 log.go:172] (0xc003726370) (0xc0024fd5e0) Stream added, broadcasting: 1 I0120 21:17:24.893683       9 log.go:172] (0xc003726370) Reply frame received for 1 I0120 21:17:24.893847       9 log.go:172] (0xc003726370) (0xc00297d540) Create stream I0120 21:17:24.893882       9 log.go:172] (0xc003726370) (0xc00297d540) Stream added, broadcasting: 3 I0120 21:17:24.898829       9 log.go:172] (0xc003726370) Reply frame received for 3 I0120 21:17:24.898878       9 log.go:172] (0xc003726370) (0xc0024fd720) Create stream I0120 21:17:24.898897       9 log.go:172] (0xc003726370) (0xc0024fd720) Stream added, broadcasting: 5 I0120 21:17:24.901322       9 log.go:172] (0xc003726370) Reply frame received for 5 I0120 21:17:25.026814       9 log.go:172] (0xc003726370) Data 
frame received for 3 I0120 21:17:25.026963 9 log.go:172] (0xc00297d540) (3) Data frame handling I0120 21:17:25.027038 9 log.go:172] (0xc00297d540) (3) Data frame sent I0120 21:17:25.149585 9 log.go:172] (0xc003726370) Data frame received for 1 I0120 21:17:25.149693 9 log.go:172] (0xc003726370) (0xc00297d540) Stream removed, broadcasting: 3 I0120 21:17:25.150040 9 log.go:172] (0xc0024fd5e0) (1) Data frame handling I0120 21:17:25.150172 9 log.go:172] (0xc0024fd5e0) (1) Data frame sent I0120 21:17:25.150242 9 log.go:172] (0xc003726370) (0xc0024fd720) Stream removed, broadcasting: 5 I0120 21:17:25.150366 9 log.go:172] (0xc003726370) (0xc0024fd5e0) Stream removed, broadcasting: 1 I0120 21:17:25.150475 9 log.go:172] (0xc003726370) Go away received I0120 21:17:25.150744 9 log.go:172] (0xc003726370) (0xc0024fd5e0) Stream removed, broadcasting: 1 I0120 21:17:25.150770 9 log.go:172] (0xc003726370) (0xc00297d540) Stream removed, broadcasting: 3 I0120 21:17:25.150856 9 log.go:172] (0xc003726370) (0xc0024fd720) Stream removed, broadcasting: 5 Jan 20 21:17:25.150: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:17:25.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4250" for this suite. • [SLOW TEST:10.600 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":32,"skipped":586,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:17:25.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:17:41.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5559" for this suite. • [SLOW TEST:16.245 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":33,"skipped":612,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:17:41.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:17:41.536: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 20 21:17:45.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3594 create -f -' Jan 20 21:17:48.435: INFO: stderr: "" Jan 20 21:17:48.435: INFO: stdout: "e2e-test-crd-publish-openapi-1098-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 20 21:17:48.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3594 delete e2e-test-crd-publish-openapi-1098-crds test-cr' Jan 20 21:17:48.683: INFO: stderr: "" Jan 20 21:17:48.683: INFO: stdout: "e2e-test-crd-publish-openapi-1098-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 20 21:17:48.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3594 apply -f -' Jan 20 21:17:49.051: INFO: stderr: "" Jan 20 21:17:49.051: INFO: stdout: "e2e-test-crd-publish-openapi-1098-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 20 21:17:49.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3594 delete e2e-test-crd-publish-openapi-1098-crds test-cr' Jan 20 21:17:49.173: INFO: stderr: "" Jan 20 21:17:49.173: INFO: stdout: "e2e-test-crd-publish-openapi-1098-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 20 21:17:49.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1098-crds' Jan 20 21:17:49.526: INFO: stderr: "" Jan 20 21:17:49.527: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1098-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:17:53.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3594" for this suite. 
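A CRD that preserves unknown fields at the schema root, as published by this spec, needs only x-kubernetes-preserve-unknown-fields on the root schema. A sketch under the assumption of a hypothetical group example.com and kind Widget:

kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept arbitrary properties
EOF
# Once the CRD is established, client-side validation allows any unknown
# properties on a Widget, and explain prints only the bare top-level description.
kubectl explain widgets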
• [SLOW TEST:11.929 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":34,"skipped":643,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:17:53.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-04d94bde-3534-45df-b183-9942176b271e STEP: Creating a pod to test consume configMaps Jan 20 21:17:53.491: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4e2440a1-930b-4973-94eb-4cc1ac2d5c13" in namespace "projected-5624" to be "success or failure" Jan 20 21:17:53.498: INFO: Pod "pod-projected-configmaps-4e2440a1-930b-4973-94eb-4cc1ac2d5c13": Phase="Pending", Reason="", readiness=false. Elapsed: 7.094986ms Jan 20 21:17:55.506: INFO: Pod "pod-projected-configmaps-4e2440a1-930b-4973-94eb-4cc1ac2d5c13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015378334s Jan 20 21:17:57.517: INFO: Pod "pod-projected-configmaps-4e2440a1-930b-4973-94eb-4cc1ac2d5c13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02569715s Jan 20 21:17:59.523: INFO: Pod "pod-projected-configmaps-4e2440a1-930b-4973-94eb-4cc1ac2d5c13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031716689s Jan 20 21:18:01.531: INFO: Pod "pod-projected-configmaps-4e2440a1-930b-4973-94eb-4cc1ac2d5c13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039526326s STEP: Saw pod success Jan 20 21:18:01.531: INFO: Pod "pod-projected-configmaps-4e2440a1-930b-4973-94eb-4cc1ac2d5c13" satisfied condition "success or failure" Jan 20 21:18:01.535: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-4e2440a1-930b-4973-94eb-4cc1ac2d5c13 container projected-configmap-volume-test: STEP: delete the pod Jan 20 21:18:01.621: INFO: Waiting for pod pod-projected-configmaps-4e2440a1-930b-4973-94eb-4cc1ac2d5c13 to disappear Jan 20 21:18:01.628: INFO: Pod pod-projected-configmaps-4e2440a1-930b-4973-94eb-4cc1ac2d5c13 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:18:01.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5624" for this suite. 
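Consuming one configMap from several volumes of the same pod, as this spec does, is a matter of listing the same source under two projected volumes. A sketch assuming an existing configMap named demo-config; the pod and mount names are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: multi-volume-pod
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/cfg-one/* /etc/cfg-two/*"]
    volumeMounts:
    - name: cfg-one
      mountPath: /etc/cfg-one
    - name: cfg-two
      mountPath: /etc/cfg-two
  volumes:
  - name: cfg-one
    projected:
      sources:
      - configMap:
          name: demo-config
  - name: cfg-two
    projected:
      sources:
      - configMap:
          name: demo-config
EOF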
• [SLOW TEST:8.295 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":659,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:18:01.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5134.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5134.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5134.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5134.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5134.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5134.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 20 21:18:12.047: INFO: DNS probes using dns-5134/dns-test-c04844c9-2dd3-4884-8b4f-c0dcb91a9bc4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:18:12.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5134" for this suite. 
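The /etc/hosts entries probed by this spec are managed by the kubelet, which writes the standard localhost entries plus the pod's own IP and hostname into every container. A quick manual check, assuming the busybox image is pullable; the pod name is illustrative:

kubectl run hosts-check --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/hosts-check --timeout=120s
kubectl exec hosts-check -- cat /etc/hosts   # shows the kubelet-managed entries
kubectl delete pod hosts-check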
• [SLOW TEST:10.493 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":36,"skipped":666,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:18:12.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 21:18:13.215: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 21:18:15.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:18:17.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jan 20 21:18:19.249: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:18:21.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715151893, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 21:18:24.316: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:18:36.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-699" for this suite. STEP: Destroying namespace "webhook-699-markers" for this suite. 
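The timeout semantics exercised here are set on the webhook registration itself. A minimal sketch of the relevant admissionregistration.k8s.io/v1 fields; every name is hypothetical and clientConfig is assumed to point at an existing in-cluster service:

kubectl apply -f - <<EOF
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-demo
webhooks:
- name: slow.example.com
  timeoutSeconds: 1        # shorter than the webhook's latency -> the request fails...
  failurePolicy: Ignore    # ...unless failures are ignored, as in the test above
  clientConfig:
    service:
      namespace: default
      name: webhook-svc
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF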
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:24.664 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":37,"skipped":680,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:18:36.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-e56790db-143a-4bab-a8a0-6bf2b70649d8 STEP: Creating a pod to test consume secrets Jan 20 21:18:36.982: INFO: Waiting up to 5m0s for pod "pod-secrets-33cb8e32-c7b5-4166-baa1-d9fdbf7f8994" in namespace "secrets-5542" to be "success or failure" Jan 20 21:18:37.010: INFO: Pod "pod-secrets-33cb8e32-c7b5-4166-baa1-d9fdbf7f8994": Phase="Pending", Reason="", readiness=false. Elapsed: 27.163092ms Jan 20 21:18:39.017: INFO: Pod "pod-secrets-33cb8e32-c7b5-4166-baa1-d9fdbf7f8994": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034012116s Jan 20 21:18:41.056: INFO: Pod "pod-secrets-33cb8e32-c7b5-4166-baa1-d9fdbf7f8994": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073780757s Jan 20 21:18:43.070: INFO: Pod "pod-secrets-33cb8e32-c7b5-4166-baa1-d9fdbf7f8994": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087176166s Jan 20 21:18:45.081: INFO: Pod "pod-secrets-33cb8e32-c7b5-4166-baa1-d9fdbf7f8994": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098749255s Jan 20 21:18:47.092: INFO: Pod "pod-secrets-33cb8e32-c7b5-4166-baa1-d9fdbf7f8994": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109274054s STEP: Saw pod success Jan 20 21:18:47.092: INFO: Pod "pod-secrets-33cb8e32-c7b5-4166-baa1-d9fdbf7f8994" satisfied condition "success or failure" Jan 20 21:18:47.097: INFO: Trying to get logs from node jerma-node pod pod-secrets-33cb8e32-c7b5-4166-baa1-d9fdbf7f8994 container secret-volume-test: STEP: delete the pod Jan 20 21:18:47.167: INFO: Waiting for pod pod-secrets-33cb8e32-c7b5-4166-baa1-d9fdbf7f8994 to disappear Jan 20 21:18:47.174: INFO: Pod pod-secrets-33cb8e32-c7b5-4166-baa1-d9fdbf7f8994 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:18:47.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5542" for this suite. 
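The Secrets spec above mounts a secret into a pod with a key-to-path mapping and an explicit per-item file mode. A standalone sketch of that shape follows; the resource names, the busybox image, the key, and the 0400 mode are illustrative stand-ins for the generated names and test image the suite uses.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map        # illustrative
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.36        # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:                   # the "mappings" part: remap the key to a new path
      - key: data-1
        path: new-path-data-1
        mode: 0400             # the "Item Mode set" part
EOF

An items[].mode entry overrides the volume-wide defaultMode for just that key, which is the behaviour this conformance case asserts.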
• [SLOW TEST:10.385 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":684,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:18:47.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5099 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 20 21:18:47.354: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 20 21:19:23.567: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5099 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 21:19:23.567: INFO: >>> kubeConfig: /root/.kube/config I0120 21:19:23.627003 9 log.go:172] (0xc004d480b0) (0xc001d2b2c0) Create stream I0120 21:19:23.627287 9 log.go:172] (0xc004d480b0) (0xc001d2b2c0) Stream added, broadcasting: 1 I0120 21:19:23.640968 9 log.go:172] (0xc004d480b0) Reply frame received for 1 I0120 21:19:23.641291 9 log.go:172] (0xc004d480b0) (0xc001c18000) Create stream I0120 21:19:23.641340 9 log.go:172] (0xc004d480b0) (0xc001c18000) Stream added, broadcasting: 3 I0120 21:19:23.643263 9 log.go:172] (0xc004d480b0) Reply frame received for 3 I0120 21:19:23.643339 9 log.go:172] (0xc004d480b0) (0xc00297ce60) Create stream I0120 21:19:23.643360 9 log.go:172] (0xc004d480b0) (0xc00297ce60) Stream added, broadcasting: 5 I0120 21:19:23.649898 9 log.go:172] (0xc004d480b0) Reply frame received for 5 I0120 21:19:23.788517 9 log.go:172] (0xc004d480b0) Data frame received for 3 I0120 21:19:23.788656 9 log.go:172] (0xc001c18000) (3) Data frame handling I0120 21:19:23.788909 9 log.go:172] (0xc001c18000) (3) Data frame sent I0120 21:19:23.903483 9 log.go:172] (0xc004d480b0) (0xc001c18000) Stream removed, broadcasting: 3 I0120 21:19:23.903938 9 log.go:172] (0xc004d480b0) Data frame received for 1 I0120 21:19:23.903958 9 log.go:172] (0xc001d2b2c0) (1) Data frame handling I0120 21:19:23.903990 9 log.go:172] (0xc001d2b2c0) (1) Data frame sent I0120 21:19:23.904287 9 log.go:172] (0xc004d480b0) (0xc001d2b2c0) Stream removed, broadcasting: 1 I0120 21:19:23.904972 9 log.go:172] 
(0xc004d480b0) (0xc00297ce60) Stream removed, broadcasting: 5 I0120 21:19:23.905068 9 log.go:172] (0xc004d480b0) (0xc001d2b2c0) Stream removed, broadcasting: 1 I0120 21:19:23.905087 9 log.go:172] (0xc004d480b0) (0xc001c18000) Stream removed, broadcasting: 3 I0120 21:19:23.905098 9 log.go:172] (0xc004d480b0) (0xc00297ce60) Stream removed, broadcasting: 5 I0120 21:19:23.907186 9 log.go:172] (0xc004d480b0) Go away received Jan 20 21:19:23.907: INFO: Waiting for responses: map[] Jan 20 21:19:23.915: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5099 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 21:19:23.915: INFO: >>> kubeConfig: /root/.kube/config I0120 21:19:23.963929 9 log.go:172] (0xc0044de420) (0xc00297d2c0) Create stream I0120 21:19:23.964237 9 log.go:172] (0xc0044de420) (0xc00297d2c0) Stream added, broadcasting: 1 I0120 21:19:23.972877 9 log.go:172] (0xc0044de420) Reply frame received for 1 I0120 21:19:23.972949 9 log.go:172] (0xc0044de420) (0xc0019e7860) Create stream I0120 21:19:23.972976 9 log.go:172] (0xc0044de420) (0xc0019e7860) Stream added, broadcasting: 3 I0120 21:19:23.974074 9 log.go:172] (0xc0044de420) Reply frame received for 3 I0120 21:19:23.974099 9 log.go:172] (0xc0044de420) (0xc001c180a0) Create stream I0120 21:19:23.974110 9 log.go:172] (0xc0044de420) (0xc001c180a0) Stream added, broadcasting: 5 I0120 21:19:23.977047 9 log.go:172] (0xc0044de420) Reply frame received for 5 I0120 21:19:24.056643 9 log.go:172] (0xc0044de420) Data frame received for 3 I0120 21:19:24.056802 9 log.go:172] (0xc0019e7860) (3) Data frame handling I0120 21:19:24.056841 9 log.go:172] (0xc0019e7860) (3) Data frame sent I0120 21:19:24.160439 9 log.go:172] (0xc0044de420) Data frame received for 1 I0120 21:19:24.160697 9 log.go:172] (0xc00297d2c0) (1) Data frame handling I0120 21:19:24.160778 9 log.go:172] (0xc00297d2c0) (1) Data frame sent I0120 21:19:24.162318 9 log.go:172] (0xc0044de420) (0xc00297d2c0) Stream removed, broadcasting: 1 I0120 21:19:24.162421 9 log.go:172] (0xc0044de420) (0xc0019e7860) Stream removed, broadcasting: 3 I0120 21:19:24.162625 9 log.go:172] (0xc0044de420) (0xc001c180a0) Stream removed, broadcasting: 5 I0120 21:19:24.162771 9 log.go:172] (0xc0044de420) Go away received I0120 21:19:24.163599 9 log.go:172] (0xc0044de420) (0xc00297d2c0) Stream removed, broadcasting: 1 I0120 21:19:24.163863 9 log.go:172] (0xc0044de420) (0xc0019e7860) Stream removed, broadcasting: 3 I0120 21:19:24.163901 9 log.go:172] (0xc0044de420) (0xc001c180a0) Stream removed, broadcasting: 5 Jan 20 21:19:24.164: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:19:24.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5099" for this suite. 
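The ExecWithOptions entries above are the framework running curl inside a host-network helper pod against agnhost's /dial endpoint, which in turn sends a UDP "hostname" probe to the target pod and reports the answers. The same probe can be issued by hand; the sketch below reuses the pod name, namespace, and addresses from this run, all of which will differ on another cluster.

kubectl exec -n pod-network-test-5099 host-test-container-pod -- /bin/sh -c \
  "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'"

A successful dial prints a small JSON body listing the responding pod's hostname (something like {"responses":["netserver-0"]}); the "Waiting for responses: map[]" lines above mean no expected response was still outstanding.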
• [SLOW TEST:36.997 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":695,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:19:24.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-744 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jan 20 21:19:24.433: INFO: Found 0 stateful pods, waiting for 3 Jan 20 21:19:34.441: INFO: Found 1 stateful pods, waiting for 3 Jan 20 21:19:44.446: INFO: Found 2 stateful pods, waiting for 3 Jan 20 21:19:54.443: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:19:54.443: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:19:54.443: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 20 21:19:54.482: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 20 21:20:04.554: INFO: Updating stateful set ss2 Jan 20 21:20:04.610: INFO: Waiting for Pod statefulset-744/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jan 20 21:20:15.438: INFO: Found 2 stateful pods, waiting for 3 Jan 20 21:20:25.449: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:20:25.449: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:20:25.449: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 20 21:20:35.449: INFO: Waiting for pod ss2-0 to enter Running - 
Ready=true, currently Running - Ready=true Jan 20 21:20:35.449: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:20:35.449: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 20 21:20:35.487: INFO: Updating stateful set ss2 Jan 20 21:20:35.633: INFO: Waiting for Pod statefulset-744/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 20 21:20:46.008: INFO: Updating stateful set ss2 Jan 20 21:20:46.143: INFO: Waiting for StatefulSet statefulset-744/ss2 to complete update Jan 20 21:20:46.143: INFO: Waiting for Pod statefulset-744/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 20 21:20:56.157: INFO: Waiting for StatefulSet statefulset-744/ss2 to complete update Jan 20 21:20:56.158: INFO: Waiting for Pod statefulset-744/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 20 21:21:06.162: INFO: Waiting for StatefulSet statefulset-744/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 20 21:21:16.162: INFO: Deleting all statefulset in ns statefulset-744 Jan 20 21:21:16.168: INFO: Scaling statefulset ss2 to 0 Jan 20 21:21:36.221: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 21:21:36.230: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:21:36.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-744" for this suite. • [SLOW TEST:132.101 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":40,"skipped":707,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:21:36.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:21:36.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2352" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":716,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:21:36.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:21:36.668: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:21:44.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6506" for this suite. 
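The Pods spec above drives command execution through the API server's exec subresource over a raw websocket rather than via kubectl. A rough sketch of the endpoint involved is below; the API server host, credentials, pod name, and command are placeholders, since the log does not print them.

# exec subresource, upgraded to a websocket speaking the v4.channel.k8s.io subprotocol:
#   wss://<apiserver>/api/v1/namespaces/pods-6506/pods/<pod-name>/exec?command=echo&command=hello&stdout=1&stderr=1
# the everyday equivalent of the same round trip:
kubectl exec -n pods-6506 <pod-name> -- echo hello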
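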
• [SLOW TEST:8.395 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":727,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:21:44.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:21:54.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5912" for this suite. • [SLOW TEST:9.359 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":43,"skipped":732,"failed":0} SSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:21:54.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-ce2a9f1c-92b7-4717-88b5-4a7f4e3434dc STEP: Creating secret with name secret-projected-all-test-volume-08337437-cc40-4a6f-a675-c039f9050ddb STEP: Creating a pod to test Check all projections for projected volume plugin Jan 20 21:21:54.495: INFO: Waiting up to 5m0s for pod "projected-volume-b3ffc8cf-c54f-43a8-a4f2-fa1a9a7951af" in namespace "projected-4022" to be "success or 
failure" Jan 20 21:21:54.519: INFO: Pod "projected-volume-b3ffc8cf-c54f-43a8-a4f2-fa1a9a7951af": Phase="Pending", Reason="", readiness=false. Elapsed: 23.829409ms Jan 20 21:21:56.535: INFO: Pod "projected-volume-b3ffc8cf-c54f-43a8-a4f2-fa1a9a7951af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04018917s Jan 20 21:21:58.549: INFO: Pod "projected-volume-b3ffc8cf-c54f-43a8-a4f2-fa1a9a7951af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053714529s Jan 20 21:22:00.564: INFO: Pod "projected-volume-b3ffc8cf-c54f-43a8-a4f2-fa1a9a7951af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069148855s Jan 20 21:22:02.581: INFO: Pod "projected-volume-b3ffc8cf-c54f-43a8-a4f2-fa1a9a7951af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085743023s STEP: Saw pod success Jan 20 21:22:02.581: INFO: Pod "projected-volume-b3ffc8cf-c54f-43a8-a4f2-fa1a9a7951af" satisfied condition "success or failure" Jan 20 21:22:02.586: INFO: Trying to get logs from node jerma-node pod projected-volume-b3ffc8cf-c54f-43a8-a4f2-fa1a9a7951af container projected-all-volume-test: STEP: delete the pod Jan 20 21:22:02.647: INFO: Waiting for pod projected-volume-b3ffc8cf-c54f-43a8-a4f2-fa1a9a7951af to disappear Jan 20 21:22:02.651: INFO: Pod projected-volume-b3ffc8cf-c54f-43a8-a4f2-fa1a9a7951af no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:22:02.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4022" for this suite. • [SLOW TEST:8.345 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":44,"skipped":737,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:22:02.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-ec221587-6914-4972-834f-64e1acf9d683 STEP: Creating a pod to test consume secrets Jan 20 21:22:02.861: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8ed84045-edb8-47b4-a2d9-e3c992037e64" in namespace "projected-4060" to be "success or failure" Jan 20 21:22:02.875: INFO: Pod "pod-projected-secrets-8ed84045-edb8-47b4-a2d9-e3c992037e64": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.891975ms Jan 20 21:22:04.881: INFO: Pod "pod-projected-secrets-8ed84045-edb8-47b4-a2d9-e3c992037e64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019665185s Jan 20 21:22:06.890: INFO: Pod "pod-projected-secrets-8ed84045-edb8-47b4-a2d9-e3c992037e64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028865637s Jan 20 21:22:08.899: INFO: Pod "pod-projected-secrets-8ed84045-edb8-47b4-a2d9-e3c992037e64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037425822s Jan 20 21:22:10.906: INFO: Pod "pod-projected-secrets-8ed84045-edb8-47b4-a2d9-e3c992037e64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044664718s STEP: Saw pod success Jan 20 21:22:10.906: INFO: Pod "pod-projected-secrets-8ed84045-edb8-47b4-a2d9-e3c992037e64" satisfied condition "success or failure" Jan 20 21:22:10.910: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-8ed84045-edb8-47b4-a2d9-e3c992037e64 container projected-secret-volume-test: STEP: delete the pod Jan 20 21:22:10.947: INFO: Waiting for pod pod-projected-secrets-8ed84045-edb8-47b4-a2d9-e3c992037e64 to disappear Jan 20 21:22:10.954: INFO: Pod pod-projected-secrets-8ed84045-edb8-47b4-a2d9-e3c992037e64 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:22:10.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4060" for this suite. • [SLOW TEST:8.302 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":737,"failed":0} [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:22:10.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 21:22:11.137: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17d3ceeb-da69-4d69-a367-0d2e31f547db" in namespace "projected-8430" to be "success or failure" Jan 20 21:22:11.176: INFO: Pod "downwardapi-volume-17d3ceeb-da69-4d69-a367-0d2e31f547db": Phase="Pending", Reason="", readiness=false. 
Elapsed: 38.222167ms Jan 20 21:22:13.183: INFO: Pod "downwardapi-volume-17d3ceeb-da69-4d69-a367-0d2e31f547db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04599594s Jan 20 21:22:15.195: INFO: Pod "downwardapi-volume-17d3ceeb-da69-4d69-a367-0d2e31f547db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057327363s Jan 20 21:22:17.201: INFO: Pod "downwardapi-volume-17d3ceeb-da69-4d69-a367-0d2e31f547db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063953925s Jan 20 21:22:19.210: INFO: Pod "downwardapi-volume-17d3ceeb-da69-4d69-a367-0d2e31f547db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072136982s STEP: Saw pod success Jan 20 21:22:19.210: INFO: Pod "downwardapi-volume-17d3ceeb-da69-4d69-a367-0d2e31f547db" satisfied condition "success or failure" Jan 20 21:22:19.215: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-17d3ceeb-da69-4d69-a367-0d2e31f547db container client-container: STEP: delete the pod Jan 20 21:22:19.374: INFO: Waiting for pod downwardapi-volume-17d3ceeb-da69-4d69-a367-0d2e31f547db to disappear Jan 20 21:22:19.386: INFO: Pod downwardapi-volume-17d3ceeb-da69-4d69-a367-0d2e31f547db no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:22:19.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8430" for this suite. • [SLOW TEST:8.438 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":737,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:22:19.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6647 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jan 20 21:22:19.567: INFO: Found 0 stateful pods, waiting for 3 Jan 20 21:22:29.805: INFO: Found 2 stateful pods, waiting for 3 Jan 20 21:22:39.578: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 20 
21:22:39.578: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:22:39.578: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 20 21:22:49.577: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:22:49.577: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:22:49.577: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 20 21:22:49.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6647 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 21:22:50.111: INFO: stderr: "I0120 21:22:49.870075 724 log.go:172] (0xc0005ea000) (0xc0005c6000) Create stream\nI0120 21:22:49.870359 724 log.go:172] (0xc0005ea000) (0xc0005c6000) Stream added, broadcasting: 1\nI0120 21:22:49.878683 724 log.go:172] (0xc0005ea000) Reply frame received for 1\nI0120 21:22:49.878749 724 log.go:172] (0xc0005ea000) (0xc0006d3a40) Create stream\nI0120 21:22:49.878774 724 log.go:172] (0xc0005ea000) (0xc0006d3a40) Stream added, broadcasting: 3\nI0120 21:22:49.880330 724 log.go:172] (0xc0005ea000) Reply frame received for 3\nI0120 21:22:49.880440 724 log.go:172] (0xc0005ea000) (0xc0003d6000) Create stream\nI0120 21:22:49.880459 724 log.go:172] (0xc0005ea000) (0xc0003d6000) Stream added, broadcasting: 5\nI0120 21:22:49.883312 724 log.go:172] (0xc0005ea000) Reply frame received for 5\nI0120 21:22:49.965990 724 log.go:172] (0xc0005ea000) Data frame received for 5\nI0120 21:22:49.966057 724 log.go:172] (0xc0003d6000) (5) Data frame handling\nI0120 21:22:49.966076 724 log.go:172] (0xc0003d6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 21:22:49.995827 724 log.go:172] (0xc0005ea000) Data frame received for 3\nI0120 21:22:49.995855 724 log.go:172] (0xc0006d3a40) (3) Data frame handling\nI0120 21:22:49.995881 724 log.go:172] (0xc0006d3a40) (3) Data frame sent\nI0120 21:22:50.087600 724 log.go:172] (0xc0005ea000) (0xc0006d3a40) Stream removed, broadcasting: 3\nI0120 21:22:50.088120 724 log.go:172] (0xc0005ea000) Data frame received for 1\nI0120 21:22:50.088184 724 log.go:172] (0xc0005c6000) (1) Data frame handling\nI0120 21:22:50.088239 724 log.go:172] (0xc0005c6000) (1) Data frame sent\nI0120 21:22:50.088536 724 log.go:172] (0xc0005ea000) (0xc0003d6000) Stream removed, broadcasting: 5\nI0120 21:22:50.088685 724 log.go:172] (0xc0005ea000) (0xc0005c6000) Stream removed, broadcasting: 1\nI0120 21:22:50.088801 724 log.go:172] (0xc0005ea000) Go away received\nI0120 21:22:50.094042 724 log.go:172] (0xc0005ea000) (0xc0005c6000) Stream removed, broadcasting: 1\nI0120 21:22:50.094071 724 log.go:172] (0xc0005ea000) (0xc0006d3a40) Stream removed, broadcasting: 3\nI0120 21:22:50.094086 724 log.go:172] (0xc0005ea000) (0xc0003d6000) Stream removed, broadcasting: 5\n" Jan 20 21:22:50.111: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 21:22:50.111: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 20 21:23:00.161: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 20 21:23:10.230: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6647 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 21:23:10.705: INFO: stderr: "I0120 21:23:10.469011 746 log.go:172] (0xc000a26630) (0xc0009fe460) Create stream\nI0120 21:23:10.469505 746 log.go:172] (0xc000a26630) (0xc0009fe460) Stream added, broadcasting: 1\nI0120 21:23:10.491265 746 log.go:172] (0xc000a26630) Reply frame received for 1\nI0120 21:23:10.491563 746 log.go:172] (0xc000a26630) (0xc00092a000) Create stream\nI0120 21:23:10.491619 746 log.go:172] (0xc000a26630) (0xc00092a000) Stream added, broadcasting: 3\nI0120 21:23:10.494326 746 log.go:172] (0xc000a26630) Reply frame received for 3\nI0120 21:23:10.494516 746 log.go:172] (0xc000a26630) (0xc000a9c000) Create stream\nI0120 21:23:10.494588 746 log.go:172] (0xc000a26630) (0xc000a9c000) Stream added, broadcasting: 5\nI0120 21:23:10.495602 746 log.go:172] (0xc000a26630) Reply frame received for 5\nI0120 21:23:10.609070 746 log.go:172] (0xc000a26630) Data frame received for 5\nI0120 21:23:10.609319 746 log.go:172] (0xc000a9c000) (5) Data frame handling\nI0120 21:23:10.609353 746 log.go:172] (0xc000a9c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0120 21:23:10.609457 746 log.go:172] (0xc000a26630) Data frame received for 3\nI0120 21:23:10.609489 746 log.go:172] (0xc00092a000) (3) Data frame handling\nI0120 21:23:10.609540 746 log.go:172] (0xc00092a000) (3) Data frame sent\nI0120 21:23:10.691529 746 log.go:172] (0xc000a26630) (0xc000a9c000) Stream removed, broadcasting: 5\nI0120 21:23:10.691837 746 log.go:172] (0xc000a26630) Data frame received for 1\nI0120 21:23:10.691911 746 log.go:172] (0xc000a26630) (0xc00092a000) Stream removed, broadcasting: 3\nI0120 21:23:10.692026 746 log.go:172] (0xc0009fe460) (1) Data frame handling\nI0120 21:23:10.692042 746 log.go:172] (0xc0009fe460) (1) Data frame sent\nI0120 21:23:10.692054 746 log.go:172] (0xc000a26630) (0xc0009fe460) Stream removed, broadcasting: 1\nI0120 21:23:10.692101 746 log.go:172] (0xc000a26630) Go away received\nI0120 21:23:10.693512 746 log.go:172] (0xc000a26630) (0xc0009fe460) Stream removed, broadcasting: 1\nI0120 21:23:10.693533 746 log.go:172] (0xc000a26630) (0xc00092a000) Stream removed, broadcasting: 3\nI0120 21:23:10.693556 746 log.go:172] (0xc000a26630) (0xc000a9c000) Stream removed, broadcasting: 5\n" Jan 20 21:23:10.705: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 20 21:23:10.705: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 20 21:23:20.802: INFO: Waiting for StatefulSet statefulset-6647/ss2 to complete update Jan 20 21:23:20.802: INFO: Waiting for Pod statefulset-6647/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 20 21:23:20.802: INFO: Waiting for Pod statefulset-6647/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 20 21:23:20.802: INFO: Waiting for Pod statefulset-6647/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 20 21:23:31.025: INFO: Waiting for StatefulSet statefulset-6647/ss2 to complete update Jan 20 21:23:31.025: INFO: Waiting for Pod statefulset-6647/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 20 21:23:31.025: INFO: Waiting for Pod statefulset-6647/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 20 21:23:40.817: INFO: 
Waiting for StatefulSet statefulset-6647/ss2 to complete update Jan 20 21:23:40.817: INFO: Waiting for Pod statefulset-6647/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 20 21:23:40.817: INFO: Waiting for Pod statefulset-6647/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 20 21:23:50.824: INFO: Waiting for StatefulSet statefulset-6647/ss2 to complete update Jan 20 21:23:50.824: INFO: Waiting for Pod statefulset-6647/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 20 21:24:00.816: INFO: Waiting for StatefulSet statefulset-6647/ss2 to complete update STEP: Rolling back to a previous revision Jan 20 21:24:10.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6647 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 21:24:11.257: INFO: stderr: "I0120 21:24:11.031046 768 log.go:172] (0xc000c1b290) (0xc000ae4500) Create stream\nI0120 21:24:11.031397 768 log.go:172] (0xc000c1b290) (0xc000ae4500) Stream added, broadcasting: 1\nI0120 21:24:11.053457 768 log.go:172] (0xc000c1b290) Reply frame received for 1\nI0120 21:24:11.053703 768 log.go:172] (0xc000c1b290) (0xc0008def00) Create stream\nI0120 21:24:11.053731 768 log.go:172] (0xc000c1b290) (0xc0008def00) Stream added, broadcasting: 3\nI0120 21:24:11.056523 768 log.go:172] (0xc000c1b290) Reply frame received for 3\nI0120 21:24:11.056566 768 log.go:172] (0xc000c1b290) (0xc0008defa0) Create stream\nI0120 21:24:11.056579 768 log.go:172] (0xc000c1b290) (0xc0008defa0) Stream added, broadcasting: 5\nI0120 21:24:11.057683 768 log.go:172] (0xc000c1b290) Reply frame received for 5\nI0120 21:24:11.131475 768 log.go:172] (0xc000c1b290) Data frame received for 5\nI0120 21:24:11.131744 768 log.go:172] (0xc0008defa0) (5) Data frame handling\nI0120 21:24:11.131787 768 log.go:172] (0xc0008defa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 21:24:11.180401 768 log.go:172] (0xc000c1b290) Data frame received for 3\nI0120 21:24:11.180448 768 log.go:172] (0xc0008def00) (3) Data frame handling\nI0120 21:24:11.180464 768 log.go:172] (0xc0008def00) (3) Data frame sent\nI0120 21:24:11.247456 768 log.go:172] (0xc000c1b290) Data frame received for 1\nI0120 21:24:11.247743 768 log.go:172] (0xc000ae4500) (1) Data frame handling\nI0120 21:24:11.247858 768 log.go:172] (0xc000ae4500) (1) Data frame sent\nI0120 21:24:11.248564 768 log.go:172] (0xc000c1b290) (0xc000ae4500) Stream removed, broadcasting: 1\nI0120 21:24:11.249207 768 log.go:172] (0xc000c1b290) (0xc0008def00) Stream removed, broadcasting: 3\nI0120 21:24:11.249325 768 log.go:172] (0xc000c1b290) (0xc0008defa0) Stream removed, broadcasting: 5\nI0120 21:24:11.249403 768 log.go:172] (0xc000c1b290) Go away received\nI0120 21:24:11.249875 768 log.go:172] (0xc000c1b290) (0xc000ae4500) Stream removed, broadcasting: 1\nI0120 21:24:11.249890 768 log.go:172] (0xc000c1b290) (0xc0008def00) Stream removed, broadcasting: 3\nI0120 21:24:11.249895 768 log.go:172] (0xc000c1b290) (0xc0008defa0) Stream removed, broadcasting: 5\n" Jan 20 21:24:11.257: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 21:24:11.257: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 21:24:21.306: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 20 21:24:31.448: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6647 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 21:24:31.814: INFO: stderr: "I0120 21:24:31.668814 788 log.go:172] (0xc00096a0b0) (0xc000796140) Create stream\nI0120 21:24:31.669047 788 log.go:172] (0xc00096a0b0) (0xc000796140) Stream added, broadcasting: 1\nI0120 21:24:31.672753 788 log.go:172] (0xc00096a0b0) Reply frame received for 1\nI0120 21:24:31.672789 788 log.go:172] (0xc00096a0b0) (0xc00084efa0) Create stream\nI0120 21:24:31.672798 788 log.go:172] (0xc00096a0b0) (0xc00084efa0) Stream added, broadcasting: 3\nI0120 21:24:31.673738 788 log.go:172] (0xc00096a0b0) Reply frame received for 3\nI0120 21:24:31.673757 788 log.go:172] (0xc00096a0b0) (0xc0002bbea0) Create stream\nI0120 21:24:31.673762 788 log.go:172] (0xc00096a0b0) (0xc0002bbea0) Stream added, broadcasting: 5\nI0120 21:24:31.675701 788 log.go:172] (0xc00096a0b0) Reply frame received for 5\nI0120 21:24:31.734216 788 log.go:172] (0xc00096a0b0) Data frame received for 5\nI0120 21:24:31.734249 788 log.go:172] (0xc0002bbea0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0120 21:24:31.734284 788 log.go:172] (0xc00096a0b0) Data frame received for 3\nI0120 21:24:31.734313 788 log.go:172] (0xc00084efa0) (3) Data frame handling\nI0120 21:24:31.734346 788 log.go:172] (0xc00084efa0) (3) Data frame sent\nI0120 21:24:31.734362 788 log.go:172] (0xc0002bbea0) (5) Data frame sent\nI0120 21:24:31.805757 788 log.go:172] (0xc00096a0b0) Data frame received for 1\nI0120 21:24:31.805806 788 log.go:172] (0xc000796140) (1) Data frame handling\nI0120 21:24:31.805814 788 log.go:172] (0xc000796140) (1) Data frame sent\nI0120 21:24:31.805823 788 log.go:172] (0xc00096a0b0) (0xc000796140) Stream removed, broadcasting: 1\nI0120 21:24:31.806130 788 log.go:172] (0xc00096a0b0) (0xc00084efa0) Stream removed, broadcasting: 3\nI0120 21:24:31.807442 788 log.go:172] (0xc00096a0b0) (0xc0002bbea0) Stream removed, broadcasting: 5\nI0120 21:24:31.807545 788 log.go:172] (0xc00096a0b0) Go away received\nI0120 21:24:31.807667 788 log.go:172] (0xc00096a0b0) (0xc000796140) Stream removed, broadcasting: 1\nI0120 21:24:31.807689 788 log.go:172] (0xc00096a0b0) (0xc00084efa0) Stream removed, broadcasting: 3\nI0120 21:24:31.807698 788 log.go:172] (0xc00096a0b0) (0xc0002bbea0) Stream removed, broadcasting: 5\n" Jan 20 21:24:31.814: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 20 21:24:31.814: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 20 21:24:41.850: INFO: Waiting for StatefulSet statefulset-6647/ss2 to complete update Jan 20 21:24:41.850: INFO: Waiting for Pod statefulset-6647/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 20 21:24:41.850: INFO: Waiting for Pod statefulset-6647/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 20 21:24:41.850: INFO: Waiting for Pod statefulset-6647/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 20 21:24:51.974: INFO: Waiting for StatefulSet statefulset-6647/ss2 to complete update Jan 20 21:24:51.975: INFO: Waiting for Pod statefulset-6647/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 20 21:24:51.975: INFO: Waiting for Pod statefulset-6647/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 20 21:25:01.909: INFO: Waiting for 
StatefulSet statefulset-6647/ss2 to complete update
Jan 20 21:25:01.909: INFO: Waiting for Pod statefulset-6647/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 20 21:25:01.909: INFO: Waiting for Pod statefulset-6647/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 20 21:25:11.930: INFO: Waiting for StatefulSet statefulset-6647/ss2 to complete update
Jan 20 21:25:11.930: INFO: Waiting for Pod statefulset-6647/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 20 21:25:23.714: INFO: Waiting for StatefulSet statefulset-6647/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 20 21:25:31.875: INFO: Deleting all statefulset in ns statefulset-6647
Jan 20 21:25:31.885: INFO: Scaling statefulset ss2 to 0
Jan 20 21:26:11.926: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 21:26:11.933: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:26:11.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6647" for this suite.
• [SLOW TEST:232.615 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":47,"skipped":765,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
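The two StatefulSet specs in this section (canary updates earlier, rolling updates and rollbacks here) exercise the RollingUpdate strategy that plain kubectl can drive as well. A condensed sketch against the ss2 set from this run follows; the container name webserver is an assumption, since the log never prints it, and the namespace and images mirror the log.

# trigger a rolling update by changing the pod template
kubectl -n statefulset-6647 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
kubectl -n statefulset-6647 rollout status statefulset/ss2    # pods are replaced in reverse ordinal order

# canary / phased update: with a partition, only ordinals >= the partition value are updated
kubectl -n statefulset-6647 patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

# each template change creates a controller revision; roll back to the previous one
kubectl -n statefulset-6647 rollout history statefulset/ss2
kubectl -n statefulset-6647 rollout undo statefulset/ss2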
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 21:26:12.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 20 21:26:12.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-773'
Jan 20 21:26:12.331: INFO: stderr: ""
Jan 20 21:26:12.331: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Jan 20 21:26:12.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-773'
Jan 20 21:26:22.377: INFO: stderr: ""
Jan 20 21:26:22.378: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:26:22.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-773" for this suite.
• [SLOW TEST:10.378 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":48,"skipped":782,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 21:26:22.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 20 21:26:22.524: INFO: Waiting up to 5m0s for pod "pod-6201ddbc-0506-486c-80c5-2488f816e8aa" in namespace "emptydir-196" to be "success or failure"
Jan 20 21:26:22.540: INFO: Pod "pod-6201ddbc-0506-486c-80c5-2488f816e8aa": Phase="Pending", Reason="", readiness=false. Elapsed: 15.567453ms
Jan 20 21:26:24.552: INFO: Pod "pod-6201ddbc-0506-486c-80c5-2488f816e8aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02809411s
Jan 20 21:26:26.573: INFO: Pod "pod-6201ddbc-0506-486c-80c5-2488f816e8aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04872709s
Jan 20 21:26:28.583: INFO: Pod "pod-6201ddbc-0506-486c-80c5-2488f816e8aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059054891s
Jan 20 21:26:30.589: INFO: Pod "pod-6201ddbc-0506-486c-80c5-2488f816e8aa": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 8.06501887s STEP: Saw pod success Jan 20 21:26:30.589: INFO: Pod "pod-6201ddbc-0506-486c-80c5-2488f816e8aa" satisfied condition "success or failure" Jan 20 21:26:30.596: INFO: Trying to get logs from node jerma-node pod pod-6201ddbc-0506-486c-80c5-2488f816e8aa container test-container: STEP: delete the pod Jan 20 21:26:30.769: INFO: Waiting for pod pod-6201ddbc-0506-486c-80c5-2488f816e8aa to disappear Jan 20 21:26:30.841: INFO: Pod pod-6201ddbc-0506-486c-80c5-2488f816e8aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:26:30.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-196" for this suite. • [SLOW TEST:8.459 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":807,"failed":0} S ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:26:30.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-54108c90-5b41-4479-9303-75bcf4765312 STEP: Creating a pod to test consume secrets Jan 20 21:26:31.010: INFO: Waiting up to 5m0s for pod "pod-secrets-99dba834-69e0-49c9-aaec-b4f7e8b64dfc" in namespace "secrets-5549" to be "success or failure" Jan 20 21:26:31.022: INFO: Pod "pod-secrets-99dba834-69e0-49c9-aaec-b4f7e8b64dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.001508ms Jan 20 21:26:33.029: INFO: Pod "pod-secrets-99dba834-69e0-49c9-aaec-b4f7e8b64dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01882237s Jan 20 21:26:35.036: INFO: Pod "pod-secrets-99dba834-69e0-49c9-aaec-b4f7e8b64dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026551788s Jan 20 21:26:37.046: INFO: Pod "pod-secrets-99dba834-69e0-49c9-aaec-b4f7e8b64dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035761474s Jan 20 21:26:39.060: INFO: Pod "pod-secrets-99dba834-69e0-49c9-aaec-b4f7e8b64dfc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.050546387s STEP: Saw pod success Jan 20 21:26:39.061: INFO: Pod "pod-secrets-99dba834-69e0-49c9-aaec-b4f7e8b64dfc" satisfied condition "success or failure" Jan 20 21:26:39.068: INFO: Trying to get logs from node jerma-node pod pod-secrets-99dba834-69e0-49c9-aaec-b4f7e8b64dfc container secret-env-test: STEP: delete the pod Jan 20 21:26:39.131: INFO: Waiting for pod pod-secrets-99dba834-69e0-49c9-aaec-b4f7e8b64dfc to disappear Jan 20 21:26:39.142: INFO: Pod pod-secrets-99dba834-69e0-49c9-aaec-b4f7e8b64dfc no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:26:39.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5549" for this suite. • [SLOW TEST:8.335 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":808,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:26:39.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:26:39.277: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 20 21:26:41.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5840 create -f -' Jan 20 21:26:44.955: INFO: stderr: "" Jan 20 21:26:44.955: INFO: stdout: "e2e-test-crd-publish-openapi-7415-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 20 21:26:44.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5840 delete e2e-test-crd-publish-openapi-7415-crds test-cr' Jan 20 21:26:45.139: INFO: stderr: "" Jan 20 21:26:45.139: INFO: stdout: "e2e-test-crd-publish-openapi-7415-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jan 20 21:26:45.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5840 apply -f -' Jan 20 21:26:45.541: INFO: stderr: "" Jan 20 21:26:45.542: INFO: stdout: "e2e-test-crd-publish-openapi-7415-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 20 21:26:45.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5840 delete e2e-test-crd-publish-openapi-7415-crds test-cr' Jan 20 21:26:45.687: INFO: stderr: "" 
Jan 20 21:26:45.687: INFO: stdout: "e2e-test-crd-publish-openapi-7415-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jan 20 21:26:45.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7415-crds' Jan 20 21:26:46.188: INFO: stderr: "" Jan 20 21:26:46.189: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7415-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:26:48.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5840" for this suite. • [SLOW TEST:9.248 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":51,"skipped":812,"failed":0} [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:26:48.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 20 21:26:57.142: INFO: Successfully updated pod "labelsupdate10bcf230-3bf1-43f2-92f5-9cd0300de8cf" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:26:59.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9375" for this suite. 
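(Illustrative sketch of what the downward API labels-update test above automates; the pod name, image, and mount path are assumptions, not taken from this run.)
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    build: "1"
spec:
  containers:
  - name: client-container
    image: busybox                       # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
# Updating a label should be reflected in the mounted file shortly afterwards,
# which is what the "Successfully updated pod" step verifies:
$ kubectl label pod labels-demo build=2 --overwrite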
• [SLOW TEST:10.790 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":812,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:26:59.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 20 21:26:59.346: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:27:10.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5733" for this suite. 
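(Illustrative sketch, with assumed names and image, of the spec shape this init-container test exercises.)
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox                       # assumed image
    command: ["sh", "-c", "exit 1"]      # init container fails
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app should never start"]
EOF
# With restartPolicy: Never, a failed init container fails the whole pod
# and the app container is never started:
$ kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # Failed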
• [SLOW TEST:11.270 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":53,"skipped":819,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:27:10.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:27:18.845: INFO: Waiting up to 5m0s for pod "client-envvars-1c7a06cf-95ba-42fc-940d-f2bd0ca10bb7" in namespace "pods-6334" to be "success or failure" Jan 20 21:27:18.869: INFO: Pod "client-envvars-1c7a06cf-95ba-42fc-940d-f2bd0ca10bb7": Phase="Pending", Reason="", readiness=false. Elapsed: 23.562622ms Jan 20 21:27:20.878: INFO: Pod "client-envvars-1c7a06cf-95ba-42fc-940d-f2bd0ca10bb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033321293s Jan 20 21:27:22.888: INFO: Pod "client-envvars-1c7a06cf-95ba-42fc-940d-f2bd0ca10bb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042733225s Jan 20 21:27:24.896: INFO: Pod "client-envvars-1c7a06cf-95ba-42fc-940d-f2bd0ca10bb7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050592057s Jan 20 21:27:26.904: INFO: Pod "client-envvars-1c7a06cf-95ba-42fc-940d-f2bd0ca10bb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058793714s STEP: Saw pod success Jan 20 21:27:26.904: INFO: Pod "client-envvars-1c7a06cf-95ba-42fc-940d-f2bd0ca10bb7" satisfied condition "success or failure" Jan 20 21:27:26.908: INFO: Trying to get logs from node jerma-node pod client-envvars-1c7a06cf-95ba-42fc-940d-f2bd0ca10bb7 container env3cont: STEP: delete the pod Jan 20 21:27:26.943: INFO: Waiting for pod client-envvars-1c7a06cf-95ba-42fc-940d-f2bd0ca10bb7 to disappear Jan 20 21:27:26.960: INFO: Pod client-envvars-1c7a06cf-95ba-42fc-940d-f2bd0ca10bb7 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:27:26.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6334" for this suite. 
• [SLOW TEST:16.466 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":829,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:27:26.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 20 21:27:27.204: INFO: Waiting up to 5m0s for pod "pod-849196c8-5737-4582-b009-e8750227a0d1" in namespace "emptydir-683" to be "success or failure" Jan 20 21:27:27.215: INFO: Pod "pod-849196c8-5737-4582-b009-e8750227a0d1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.993999ms Jan 20 21:27:29.229: INFO: Pod "pod-849196c8-5737-4582-b009-e8750227a0d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024325832s Jan 20 21:27:31.237: INFO: Pod "pod-849196c8-5737-4582-b009-e8750227a0d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03230612s Jan 20 21:27:33.244: INFO: Pod "pod-849196c8-5737-4582-b009-e8750227a0d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03971939s Jan 20 21:27:35.252: INFO: Pod "pod-849196c8-5737-4582-b009-e8750227a0d1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047488654s Jan 20 21:27:37.264: INFO: Pod "pod-849196c8-5737-4582-b009-e8750227a0d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059470052s STEP: Saw pod success Jan 20 21:27:37.264: INFO: Pod "pod-849196c8-5737-4582-b009-e8750227a0d1" satisfied condition "success or failure" Jan 20 21:27:37.273: INFO: Trying to get logs from node jerma-node pod pod-849196c8-5737-4582-b009-e8750227a0d1 container test-container: STEP: delete the pod Jan 20 21:27:37.606: INFO: Waiting for pod pod-849196c8-5737-4582-b009-e8750227a0d1 to disappear Jan 20 21:27:37.619: INFO: Pod pod-849196c8-5737-4582-b009-e8750227a0d1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:27:37.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-683" for this suite. 
• [SLOW TEST:10.654 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:27:37.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Jan 20 21:27:37.867: INFO: Waiting up to 5m0s for pod "client-containers-d42ad5ad-4332-47ff-bab7-8bfb205bc4d0" in namespace "containers-8761" to be "success or failure" Jan 20 21:27:37.886: INFO: Pod "client-containers-d42ad5ad-4332-47ff-bab7-8bfb205bc4d0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.314ms Jan 20 21:27:39.899: INFO: Pod "client-containers-d42ad5ad-4332-47ff-bab7-8bfb205bc4d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031591986s Jan 20 21:27:41.909: INFO: Pod "client-containers-d42ad5ad-4332-47ff-bab7-8bfb205bc4d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042036632s Jan 20 21:27:44.501: INFO: Pod "client-containers-d42ad5ad-4332-47ff-bab7-8bfb205bc4d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.634131292s Jan 20 21:27:46.509: INFO: Pod "client-containers-d42ad5ad-4332-47ff-bab7-8bfb205bc4d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.642087286s STEP: Saw pod success Jan 20 21:27:46.509: INFO: Pod "client-containers-d42ad5ad-4332-47ff-bab7-8bfb205bc4d0" satisfied condition "success or failure" Jan 20 21:27:46.513: INFO: Trying to get logs from node jerma-node pod client-containers-d42ad5ad-4332-47ff-bab7-8bfb205bc4d0 container test-container: STEP: delete the pod Jan 20 21:27:46.554: INFO: Waiting for pod client-containers-d42ad5ad-4332-47ff-bab7-8bfb205bc4d0 to disappear Jan 20 21:27:46.595: INFO: Pod client-containers-d42ad5ad-4332-47ff-bab7-8bfb205bc4d0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:27:46.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8761" for this suite. 
• [SLOW TEST:9.060 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":874,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:27:46.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:27:54.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2183" for this suite. • [SLOW TEST:8.221 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":878,"failed":0} S ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:27:54.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:27:55.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2980" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":58,"skipped":879,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:27:55.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-557 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-557 Jan 20 21:27:55.193: INFO: Found 0 stateful pods, waiting for 1 Jan 20 21:28:05.203: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 20 21:28:05.250: INFO: Deleting all statefulset in ns statefulset-557 Jan 20 21:28:05.360: INFO: Scaling statefulset ss to 0 Jan 20 21:28:25.506: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 21:28:25.511: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:28:25.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-557" for this suite. 
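(For reference, the scale subresource the test drives through the API can also be exercised with kubectl against the same statefulset; the namespace is taken from this run, and the object is deleted again by the test's AfterEach.)
$ kubectl get statefulset ss -n statefulset-557 -o jsonpath='{.spec.replicas}'
$ kubectl scale statefulset ss -n statefulset-557 --replicas=2   # updates via the scale subresource
$ kubectl get statefulset ss -n statefulset-557 -o jsonpath='{.spec.replicas}'   # now 2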
• [SLOW TEST:30.655 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":59,"skipped":882,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:28:25.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 20 21:28:25.831: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 20 21:28:25.891: INFO: Waiting for terminating namespaces to be deleted... Jan 20 21:28:25.896: INFO: Logging pods the kubelet thinks are on node jerma-node before test Jan 20 21:28:25.907: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 20 21:28:25.907: INFO: Container kube-proxy ready: true, restart count 0 Jan 20 21:28:25.907: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 20 21:28:25.907: INFO: Container weave ready: true, restart count 1 Jan 20 21:28:25.907: INFO: Container weave-npc ready: true, restart count 0 Jan 20 21:28:25.907: INFO: busybox-scheduling-8de0acc7-f4cc-4caa-8fc4-af5254d55bb2 from kubelet-test-2183 started at 2020-01-20 21:27:46 +0000 UTC (1 container statuses recorded) Jan 20 21:28:25.907: INFO: Container busybox-scheduling-8de0acc7-f4cc-4caa-8fc4-af5254d55bb2 ready: true, restart count 0 Jan 20 21:28:25.907: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Jan 20 21:28:25.934: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 20 21:28:25.934: INFO: Container kube-scheduler ready: true, restart count 3 Jan 20 21:28:25.934: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 20 21:28:25.934: INFO: Container kube-apiserver ready: true, restart count 1 Jan 20 21:28:25.934: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 20 21:28:25.934: INFO: Container etcd ready: true, restart count 1 Jan 20 21:28:25.934: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 20 21:28:25.934: INFO: Container coredns
ready: true, restart count 0 Jan 20 21:28:25.934: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 20 21:28:25.934: INFO: Container coredns ready: true, restart count 0 Jan 20 21:28:25.934: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 20 21:28:25.934: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 20 21:28:25.934: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 20 21:28:25.934: INFO: Container kube-proxy ready: true, restart count 0 Jan 20 21:28:25.934: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 20 21:28:25.934: INFO: Container weave ready: true, restart count 0 Jan 20 21:28:25.934: INFO: Container weave-npc ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ebb58bb8aba449], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:28:27.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4994" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":60,"skipped":911,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:28:27.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-2076 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2076 STEP: Deleting pre-stop pod Jan 20 21:28:50.288: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:28:50.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2076" for this suite. • [SLOW TEST:23.263 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":61,"skipped":970,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:28:50.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 20 21:28:50.544: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8484 /api/v1/namespaces/watch-8484/configmaps/e2e-watch-test-label-changed 308341df-185f-471f-894a-68bdfebcfcfa 3252571 0 2020-01-20 21:28:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 20 21:28:50.545: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8484 /api/v1/namespaces/watch-8484/configmaps/e2e-watch-test-label-changed 308341df-185f-471f-894a-68bdfebcfcfa 3252572 0 2020-01-20 21:28:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 20 21:28:50.545: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8484 /api/v1/namespaces/watch-8484/configmaps/e2e-watch-test-label-changed 308341df-185f-471f-894a-68bdfebcfcfa 3252573 0 2020-01-20 21:28:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the 
configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 20 21:29:00.615: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8484 /api/v1/namespaces/watch-8484/configmaps/e2e-watch-test-label-changed 308341df-185f-471f-894a-68bdfebcfcfa 3252612 0 2020-01-20 21:28:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 20 21:29:00.616: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8484 /api/v1/namespaces/watch-8484/configmaps/e2e-watch-test-label-changed 308341df-185f-471f-894a-68bdfebcfcfa 3252613 0 2020-01-20 21:28:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 20 21:29:00.616: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8484 /api/v1/namespaces/watch-8484/configmaps/e2e-watch-test-label-changed 308341df-185f-471f-894a-68bdfebcfcfa 3252614 0 2020-01-20 21:28:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:29:00.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8484" for this suite. • [SLOW TEST:10.340 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":62,"skipped":1003,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:29:00.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 21:29:00.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8692658e-a2d3-48eb-9939-6dd379b079ad" in namespace "projected-6309" to be "success or failure" Jan 20 21:29:00.859: INFO: Pod "downwardapi-volume-8692658e-a2d3-48eb-9939-6dd379b079ad": Phase="Pending", 
Reason="", readiness=false. Elapsed: 32.713969ms Jan 20 21:29:02.879: INFO: Pod "downwardapi-volume-8692658e-a2d3-48eb-9939-6dd379b079ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052830835s Jan 20 21:29:04.887: INFO: Pod "downwardapi-volume-8692658e-a2d3-48eb-9939-6dd379b079ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061179716s Jan 20 21:29:06.928: INFO: Pod "downwardapi-volume-8692658e-a2d3-48eb-9939-6dd379b079ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102027661s Jan 20 21:29:08.936: INFO: Pod "downwardapi-volume-8692658e-a2d3-48eb-9939-6dd379b079ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.110533636s STEP: Saw pod success Jan 20 21:29:08.937: INFO: Pod "downwardapi-volume-8692658e-a2d3-48eb-9939-6dd379b079ad" satisfied condition "success or failure" Jan 20 21:29:08.943: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8692658e-a2d3-48eb-9939-6dd379b079ad container client-container: STEP: delete the pod Jan 20 21:29:09.032: INFO: Waiting for pod downwardapi-volume-8692658e-a2d3-48eb-9939-6dd379b079ad to disappear Jan 20 21:29:09.039: INFO: Pod downwardapi-volume-8692658e-a2d3-48eb-9939-6dd379b079ad no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:29:09.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6309" for this suite. • [SLOW TEST:8.395 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1051,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:29:09.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-16e4648c-2dbe-4114-8a7a-dfa13684ac80 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:29:21.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1178" for this suite. 
• [SLOW TEST:12.203 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1056,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:29:21.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 21:29:22.103: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 21:29:24.118: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:29:26.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:29:28.326: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:29:30.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152562, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 21:29:33.185: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:29:34.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7668" for this suite. STEP: Destroying namespace "webhook-7668-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.908 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":65,"skipped":1068,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:29:34.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:29:45.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1558" for this suite. • [SLOW TEST:11.231 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":66,"skipped":1071,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:29:45.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9827 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 20 21:29:45.512: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 20 21:30:19.786: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9827 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 21:30:19.787: INFO: >>> kubeConfig: /root/.kube/config I0120 21:30:19.864843 9 log.go:172] (0xc004d48580) (0xc00297c820) Create stream I0120 21:30:19.865053 9 log.go:172] (0xc004d48580) (0xc00297c820) Stream added, broadcasting: 1 I0120 21:30:19.881354 9 log.go:172] (0xc004d48580) Reply frame received for 1 I0120 21:30:19.881523 9 log.go:172] (0xc004d48580) (0xc001e1ea00) Create stream I0120 21:30:19.881537 9 log.go:172] (0xc004d48580) (0xc001e1ea00) Stream added, broadcasting: 3 I0120 21:30:19.888324 9 log.go:172] (0xc004d48580) Reply frame received for 3 I0120 21:30:19.888373 9 log.go:172] (0xc004d48580) (0xc00202abe0) Create stream I0120 21:30:19.888387 9 log.go:172] (0xc004d48580) (0xc00202abe0) Stream added, broadcasting: 5 I0120 21:30:19.890373 9 log.go:172] (0xc004d48580) Reply frame received for 5 I0120 21:30:20.974340 9 log.go:172] (0xc004d48580) Data frame received for 3 I0120 21:30:20.974441 9 log.go:172] (0xc001e1ea00) (3) Data frame handling I0120 21:30:20.974475 9 log.go:172] (0xc001e1ea00) (3) Data frame sent I0120 21:30:21.056503 9 log.go:172] (0xc004d48580) Data frame received for 1 I0120 21:30:21.056758 9 log.go:172] (0xc004d48580) (0xc001e1ea00) Stream removed, broadcasting: 3 I0120 21:30:21.056972 9 log.go:172] (0xc00297c820) (1) Data frame handling I0120 21:30:21.057031 9 log.go:172] (0xc00297c820) (1) Data frame sent I0120 21:30:21.057047 9 log.go:172] (0xc004d48580) (0xc00202abe0) Stream removed, broadcasting: 5 I0120 21:30:21.057262 9 log.go:172] (0xc004d48580) (0xc00297c820) Stream removed, broadcasting: 1 I0120 21:30:21.057327 9 log.go:172] (0xc004d48580) Go away received I0120 21:30:21.057924 9 log.go:172] (0xc004d48580) (0xc00297c820) Stream removed, broadcasting: 1 I0120 21:30:21.057949 9 log.go:172] (0xc004d48580) (0xc001e1ea00) Stream removed, broadcasting: 3 I0120 21:30:21.057957 9 log.go:172] (0xc004d48580) (0xc00202abe0) Stream removed, broadcasting: 5 Jan 20 21:30:21.058: INFO: Found all expected endpoints: [netserver-0] Jan 20 21:30:21.070: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 
| grep -v '^\s*$'] Namespace:pod-network-test-9827 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 21:30:21.070: INFO: >>> kubeConfig: /root/.kube/config I0120 21:30:21.107282 9 log.go:172] (0xc0029628f0) (0xc002327720) Create stream I0120 21:30:21.107417 9 log.go:172] (0xc0029628f0) (0xc002327720) Stream added, broadcasting: 1 I0120 21:30:21.112571 9 log.go:172] (0xc0029628f0) Reply frame received for 1 I0120 21:30:21.112681 9 log.go:172] (0xc0029628f0) (0xc0029fe140) Create stream I0120 21:30:21.112696 9 log.go:172] (0xc0029628f0) (0xc0029fe140) Stream added, broadcasting: 3 I0120 21:30:21.115029 9 log.go:172] (0xc0029628f0) Reply frame received for 3 I0120 21:30:21.115078 9 log.go:172] (0xc0029628f0) (0xc002327900) Create stream I0120 21:30:21.115085 9 log.go:172] (0xc0029628f0) (0xc002327900) Stream added, broadcasting: 5 I0120 21:30:21.116258 9 log.go:172] (0xc0029628f0) Reply frame received for 5 I0120 21:30:22.210396 9 log.go:172] (0xc0029628f0) Data frame received for 3 I0120 21:30:22.210608 9 log.go:172] (0xc0029fe140) (3) Data frame handling I0120 21:30:22.210653 9 log.go:172] (0xc0029fe140) (3) Data frame sent I0120 21:30:22.296497 9 log.go:172] (0xc0029628f0) (0xc0029fe140) Stream removed, broadcasting: 3 I0120 21:30:22.297234 9 log.go:172] (0xc0029628f0) Data frame received for 1 I0120 21:30:22.297654 9 log.go:172] (0xc0029628f0) (0xc002327900) Stream removed, broadcasting: 5 I0120 21:30:22.298046 9 log.go:172] (0xc002327720) (1) Data frame handling I0120 21:30:22.298136 9 log.go:172] (0xc002327720) (1) Data frame sent I0120 21:30:22.298176 9 log.go:172] (0xc0029628f0) (0xc002327720) Stream removed, broadcasting: 1 I0120 21:30:22.298215 9 log.go:172] (0xc0029628f0) Go away received I0120 21:30:22.298951 9 log.go:172] (0xc0029628f0) (0xc002327720) Stream removed, broadcasting: 1 I0120 21:30:22.299080 9 log.go:172] (0xc0029628f0) (0xc0029fe140) Stream removed, broadcasting: 3 I0120 21:30:22.299128 9 log.go:172] (0xc0029628f0) (0xc002327900) Stream removed, broadcasting: 5 Jan 20 21:30:22.299: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:30:22.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9827" for this suite. 
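(The ExecWithOptions probes above can be reproduced by hand; 10.44.0.1 and 10.32.0.4 are the netserver pod IPs from this run.)
$ kubectl exec -n pod-network-test-9827 host-test-container-pod -- \
    /bin/sh -c "echo hostName | nc -w 1 -u 10.44.0.1 8081"
# A UDP reply carrying the peer's hostname (netserver-0) confirms node-to-pod connectivity.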
• [SLOW TEST:36.922 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1072,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:30:22.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 20 21:30:29.452: INFO: 10 pods remaining Jan 20 21:30:29.452: INFO: 10 pods have nil DeletionTimestamp Jan 20 21:30:29.452: INFO: Jan 20 21:30:30.551: INFO: 10 pods remaining Jan 20 21:30:30.551: INFO: 9 pods have nil DeletionTimestamp Jan 20 21:30:30.551: INFO: Jan 20 21:30:31.053: INFO: 0 pods remaining Jan 20 21:30:31.053: INFO: 0 pods have nil DeletionTimestamp Jan 20 21:30:31.053: INFO: STEP: Gathering metrics W0120 21:30:32.275335 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 20 21:30:32.275: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:30:32.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6101" for this suite.
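(What "the deleteOptions says so" refers to: a Foreground propagation policy, which keeps the RC until its dependents are gone. A hand-runnable sketch, with the rc name and namespace assumed:)
$ kubectl proxy --port=8001 &
$ curl -X DELETE 'http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/my-rc' \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
# The RC lingers with a foregroundDeletion finalizer until all its pods are deleted,
# matching the "N pods remaining" countdown in the log above.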
• [SLOW TEST:10.264 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":68,"skipped":1109,"failed":0} SSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:30:32.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:31:19.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-96" for this suite. • [SLOW TEST:46.618 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":69,"skipped":1114,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:31:19.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jan 20 21:31:19.362: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Jan 20 21:31:19.872: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 20 21:31:22.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 21:31:24.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 21:31:26.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 21:31:28.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 21:31:30.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152680, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152679, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 21:31:32.875: INFO: Waited 828.518972ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:31:33.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-9408" for this suite.
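[editor's note] The five near-identical DeploymentStatus dumps above are the framework polling every ~2s until the sample API server's single replica reports Available; MinimumReplicasUnavailable is the expected transient state during rollout. Once the APIService registration succeeds, the aggregated group can be inspected like any built-in one. The wardle group and flunders resource below are the conventional sample-apiserver names, an assumption rather than something this log prints:

    kubectl get apiservices                                      # aggregated entries appear beside core groups
    kubectl get apiservice v1alpha1.wardle.example.com -o yaml   # its Available condition should be True
    kubectl get flunders                                         # the sample resource, served via the aggregator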
• [SLOW TEST:14.428 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":70,"skipped":1127,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:31:33.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-939c6e7c-54b4-4418-88e4-a2144df177cb STEP: Creating a pod to test consume configMaps Jan 20 21:31:33.810: INFO: Waiting up to 5m0s for pod "pod-configmaps-24e3fe8b-2e41-438a-8025-9ca6a81b6952" in namespace "configmap-6506" to be "success or failure" Jan 20 21:31:33.970: INFO: Pod "pod-configmaps-24e3fe8b-2e41-438a-8025-9ca6a81b6952": Phase="Pending", Reason="", readiness=false. Elapsed: 160.355566ms Jan 20 21:31:35.977: INFO: Pod "pod-configmaps-24e3fe8b-2e41-438a-8025-9ca6a81b6952": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167654715s Jan 20 21:31:37.984: INFO: Pod "pod-configmaps-24e3fe8b-2e41-438a-8025-9ca6a81b6952": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17376797s Jan 20 21:31:39.991: INFO: Pod "pod-configmaps-24e3fe8b-2e41-438a-8025-9ca6a81b6952": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181438536s Jan 20 21:31:42.009: INFO: Pod "pod-configmaps-24e3fe8b-2e41-438a-8025-9ca6a81b6952": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198863531s Jan 20 21:31:44.016: INFO: Pod "pod-configmaps-24e3fe8b-2e41-438a-8025-9ca6a81b6952": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.206630069s STEP: Saw pod success Jan 20 21:31:44.017: INFO: Pod "pod-configmaps-24e3fe8b-2e41-438a-8025-9ca6a81b6952" satisfied condition "success or failure" Jan 20 21:31:44.020: INFO: Trying to get logs from node jerma-node pod pod-configmaps-24e3fe8b-2e41-438a-8025-9ca6a81b6952 container configmap-volume-test: STEP: delete the pod Jan 20 21:31:44.212: INFO: Waiting for pod pod-configmaps-24e3fe8b-2e41-438a-8025-9ca6a81b6952 to disappear Jan 20 21:31:44.220: INFO: Pod pod-configmaps-24e3fe8b-2e41-438a-8025-9ca6a81b6952 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:31:44.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6506" for this suite. 
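[editor's note] "mappings and Item mode set" means the volume's items: list both remaps a key to a nested path and pins a per-file mode. A minimal sketch of the pod shape this spec builds, reusing the configMap name from the log; the key, path, 0400 mode, and busybox probe are illustrative stand-ins for the test's internals:

    kubectl apply -n configmap-6506 -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-mode-demo      # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "ls -l /etc/cm/path/to/data-2 && cat /etc/cm/path/to/data-2"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: configmap-test-volume-map-939c6e7c-54b4-4418-88e4-a2144df177cb
          items:
          - key: data-1              # assumed key
            path: path/to/data-2     # key remapped to a nested file path
            mode: 0400               # per-item mode; ls should show -r--------
    EOF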
• [SLOW TEST:10.589 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1141,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:31:44.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 20 21:31:44.499: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 20 21:31:49.537: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:31:49.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1028" for this suite. 
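[editor's note] "Then the pod is released" is purely label-driven: the test rewrites the pod's name label so it stops matching the rc's selector, at which point the controller clears the pod's ownerReference and creates a replacement to get back to one replica. By hand (the generated pod name is not printed in this log, so <pod-release-xxxxx> is a placeholder):

    kubectl label pod <pod-release-xxxxx> -n replication-controller-1028 name=released --overwrite
    kubectl get pod <pod-release-xxxxx> -n replication-controller-1028 \
      -o jsonpath='{.metadata.ownerReferences}'    # empty once the rc has let go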
• [SLOW TEST:5.640 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":72,"skipped":1159,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:31:49.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 21:31:49.974: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f" in namespace "downward-api-1839" to be "success or failure" Jan 20 21:31:50.021: INFO: Pod "downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.825141ms Jan 20 21:31:52.030: INFO: Pod "downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056201252s Jan 20 21:31:54.047: INFO: Pod "downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072454983s Jan 20 21:31:56.092: INFO: Pod "downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117553294s Jan 20 21:31:58.100: INFO: Pod "downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125352696s Jan 20 21:32:00.109: INFO: Pod "downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.134813413s Jan 20 21:32:02.126: INFO: Pod "downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.151953165s Jan 20 21:32:04.141: INFO: Pod "downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.166379655s STEP: Saw pod success Jan 20 21:32:04.141: INFO: Pod "downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f" satisfied condition "success or failure" Jan 20 21:32:04.146: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f container client-container: STEP: delete the pod Jan 20 21:32:04.232: INFO: Waiting for pod downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f to disappear Jan 20 21:32:04.246: INFO: Pod downwardapi-volume-6505a95e-0797-4dee-99d5-4d52b144470f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:32:04.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1839" for this suite. • [SLOW TEST:14.381 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1160,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:32:04.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-9f637efc-90cd-4e4d-ae67-a8b7fbfe314f STEP: Creating a pod to test consume configMaps Jan 20 21:32:04.437: INFO: Waiting up to 5m0s for pod "pod-configmaps-a5add3fb-828f-432b-a847-a36c676cc307" in namespace "configmap-6039" to be "success or failure" Jan 20 21:32:04.450: INFO: Pod "pod-configmaps-a5add3fb-828f-432b-a847-a36c676cc307": Phase="Pending", Reason="", readiness=false. Elapsed: 13.394667ms Jan 20 21:32:06.463: INFO: Pod "pod-configmaps-a5add3fb-828f-432b-a847-a36c676cc307": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026234897s Jan 20 21:32:08.481: INFO: Pod "pod-configmaps-a5add3fb-828f-432b-a847-a36c676cc307": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044402985s Jan 20 21:32:10.499: INFO: Pod "pod-configmaps-a5add3fb-828f-432b-a847-a36c676cc307": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062158715s Jan 20 21:32:12.511: INFO: Pod "pod-configmaps-a5add3fb-828f-432b-a847-a36c676cc307": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.073875348s STEP: Saw pod success Jan 20 21:32:12.511: INFO: Pod "pod-configmaps-a5add3fb-828f-432b-a847-a36c676cc307" satisfied condition "success or failure" Jan 20 21:32:12.515: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a5add3fb-828f-432b-a847-a36c676cc307 container configmap-volume-test: STEP: delete the pod Jan 20 21:32:12.579: INFO: Waiting for pod pod-configmaps-a5add3fb-828f-432b-a847-a36c676cc307 to disappear Jan 20 21:32:12.585: INFO: Pod pod-configmaps-a5add3fb-828f-432b-a847-a36c676cc307 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:32:12.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6039" for this suite. • [SLOW TEST:8.343 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1161,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:32:12.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 20 21:32:12.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5888' Jan 20 21:32:12.873: INFO: stderr: "" Jan 20 21:32:12.873: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jan 20 21:32:22.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5888 -o json' Jan 20 21:32:23.121: INFO: stderr: "" Jan 20 21:32:23.122: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-20T21:32:12Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5888\",\n \"resourceVersion\": \"3253689\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5888/pods/e2e-test-httpd-pod\",\n \"uid\": \"c2669a5a-2326-4b97-9bdc-18a25e8d402b\"\n },\n \"spec\": {\n 
\"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-sgnm8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-sgnm8\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-sgnm8\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-20T21:32:12Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-20T21:32:19Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-20T21:32:19Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-20T21:32:12Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://32e7516a01ce52ab6ffa86884c6cb2b03319792a330d06bcd21e983828346f26\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-20T21:32:19Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.2.250\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"podIPs\": [\n {\n \"ip\": \"10.44.0.1\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-20T21:32:12Z\"\n }\n}\n" STEP: replace the image in the pod Jan 20 21:32:23.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5888' Jan 20 21:32:23.896: INFO: stderr: "" Jan 20 21:32:23.896: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882 Jan 20 21:32:23.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5888' Jan 20 21:32:30.451: INFO: stderr: "" Jan 20 21:32:30.451: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:32:30.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-5888" for this suite. • [SLOW TEST:17.908 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":75,"skipped":1170,"failed":0} SS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:32:30.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 20 21:32:39.748: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:32:40.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2218" for this suite. 
• [SLOW TEST:10.293 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":76,"skipped":1172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:32:40.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-cd9c57cf-826f-4bc8-87a1-d1d89064871b STEP: Creating a pod to test consume configMaps Jan 20 21:32:41.082: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a6a58b1-251a-4e53-b8f2-785e32e507f5" in namespace "projected-237" to be "success or failure" Jan 20 21:32:41.090: INFO: Pod "pod-projected-configmaps-2a6a58b1-251a-4e53-b8f2-785e32e507f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015306ms Jan 20 21:32:43.101: INFO: Pod "pod-projected-configmaps-2a6a58b1-251a-4e53-b8f2-785e32e507f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018400199s Jan 20 21:32:45.108: INFO: Pod "pod-projected-configmaps-2a6a58b1-251a-4e53-b8f2-785e32e507f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025608594s Jan 20 21:32:47.115: INFO: Pod "pod-projected-configmaps-2a6a58b1-251a-4e53-b8f2-785e32e507f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032663856s Jan 20 21:32:49.124: INFO: Pod "pod-projected-configmaps-2a6a58b1-251a-4e53-b8f2-785e32e507f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041496906s Jan 20 21:32:51.136: INFO: Pod "pod-projected-configmaps-2a6a58b1-251a-4e53-b8f2-785e32e507f5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.053249563s Jan 20 21:32:53.146: INFO: Pod "pod-projected-configmaps-2a6a58b1-251a-4e53-b8f2-785e32e507f5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.063877733s STEP: Saw pod success Jan 20 21:32:53.147: INFO: Pod "pod-projected-configmaps-2a6a58b1-251a-4e53-b8f2-785e32e507f5" satisfied condition "success or failure" Jan 20 21:32:53.151: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-2a6a58b1-251a-4e53-b8f2-785e32e507f5 container projected-configmap-volume-test: STEP: delete the pod Jan 20 21:32:53.208: INFO: Waiting for pod pod-projected-configmaps-2a6a58b1-251a-4e53-b8f2-785e32e507f5 to disappear Jan 20 21:32:53.214: INFO: Pod pod-projected-configmaps-2a6a58b1-251a-4e53-b8f2-785e32e507f5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:32:53.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-237" for this suite. • [SLOW TEST:12.414 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:32:53.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 20 21:32:53.493: INFO: Waiting up to 5m0s for pod "pod-8e68f38b-ba38-45b9-b6c9-62c06cca8d4f" in namespace "emptydir-3310" to be "success or failure" Jan 20 21:32:53.561: INFO: Pod "pod-8e68f38b-ba38-45b9-b6c9-62c06cca8d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 68.527088ms Jan 20 21:32:55.572: INFO: Pod "pod-8e68f38b-ba38-45b9-b6c9-62c06cca8d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079027153s Jan 20 21:32:57.581: INFO: Pod "pod-8e68f38b-ba38-45b9-b6c9-62c06cca8d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088256063s Jan 20 21:33:00.401: INFO: Pod "pod-8e68f38b-ba38-45b9-b6c9-62c06cca8d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.908635071s Jan 20 21:33:02.408: INFO: Pod "pod-8e68f38b-ba38-45b9-b6c9-62c06cca8d4f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.914873281s STEP: Saw pod success Jan 20 21:33:02.408: INFO: Pod "pod-8e68f38b-ba38-45b9-b6c9-62c06cca8d4f" satisfied condition "success or failure" Jan 20 21:33:02.412: INFO: Trying to get logs from node jerma-node pod pod-8e68f38b-ba38-45b9-b6c9-62c06cca8d4f container test-container: STEP: delete the pod Jan 20 21:33:02.610: INFO: Waiting for pod pod-8e68f38b-ba38-45b9-b6c9-62c06cca8d4f to disappear Jan 20 21:33:02.620: INFO: Pod pod-8e68f38b-ba38-45b9-b6c9-62c06cca8d4f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:33:02.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3310" for this suite. • [SLOW TEST:9.421 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1268,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:33:02.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 20 21:33:02.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5739' Jan 20 21:33:03.018: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 20 21:33:03.018: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
Jan 20 21:33:05.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5739'
Jan 20 21:33:05.284: INFO: stderr: ""
Jan 20 21:33:05.284: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:33:05.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5739" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":79,"skipped":1269,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 21:33:05.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 20 21:33:05.964: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 20 21:33:07.989: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152785, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152785, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152786, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152785, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 21:33:10.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152785, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152785, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152786, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152785, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 21:33:11.998: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152785, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152785, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152786, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152785, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 21:33:13.998: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152785, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152785, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152786, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152785, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 20 21:33:17.041: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 21:33:17.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 21:33:18.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be
ready STEP: Destroying namespace "crd-webhook-6704" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:13.363 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":80,"skipped":1272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:33:18.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Jan 20 21:33:18.927: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5397" to be "success or failure" Jan 20 21:33:19.036: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 108.026879ms Jan 20 21:33:21.044: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116298194s Jan 20 21:33:23.055: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127127349s Jan 20 21:33:25.065: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137518092s Jan 20 21:33:27.074: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146338088s Jan 20 21:33:29.080: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.152209374s Jan 20 21:33:31.086: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.158110862s Jan 20 21:33:33.094: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.166797031s STEP: Saw pod success Jan 20 21:33:33.095: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 20 21:33:33.099: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 20 21:33:33.144: INFO: Waiting for pod pod-host-path-test to disappear Jan 20 21:33:33.155: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:33:33.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5397" for this suite. • [SLOW TEST:14.513 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1315,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:33:33.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 21:33:33.355: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e69687c-a130-44ab-b455-040495b372a4" in namespace "projected-3117" to be "success or failure" Jan 20 21:33:33.419: INFO: Pod "downwardapi-volume-9e69687c-a130-44ab-b455-040495b372a4": Phase="Pending", Reason="", readiness=false. Elapsed: 64.191902ms Jan 20 21:33:35.430: INFO: Pod "downwardapi-volume-9e69687c-a130-44ab-b455-040495b372a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074456881s Jan 20 21:33:37.440: INFO: Pod "downwardapi-volume-9e69687c-a130-44ab-b455-040495b372a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084243335s Jan 20 21:33:39.450: INFO: Pod "downwardapi-volume-9e69687c-a130-44ab-b455-040495b372a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095130891s Jan 20 21:33:41.461: INFO: Pod "downwardapi-volume-9e69687c-a130-44ab-b455-040495b372a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.105647993s STEP: Saw pod success Jan 20 21:33:41.461: INFO: Pod "downwardapi-volume-9e69687c-a130-44ab-b455-040495b372a4" satisfied condition "success or failure" Jan 20 21:33:41.465: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9e69687c-a130-44ab-b455-040495b372a4 container client-container: STEP: delete the pod Jan 20 21:33:41.521: INFO: Waiting for pod downwardapi-volume-9e69687c-a130-44ab-b455-040495b372a4 to disappear Jan 20 21:33:41.584: INFO: Pod downwardapi-volume-9e69687c-a130-44ab-b455-040495b372a4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:33:41.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3117" for this suite. • [SLOW TEST:8.437 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1326,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:33:41.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 20 21:33:41.783: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:33:53.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7657" for this suite. 
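[editor's note] With restartPolicy: Never, init containers still run strictly one at a time to completion before any app container starts, and a failing init container sends the pod straight to Failed. A minimal sketch of the shape exercised here (pod name, images, and commands are illustrative, not read from this log):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: busybox:1.29
        command: ["/bin/true"]
      - name: init2
        image: busybox:1.29
        command: ["/bin/true"]
      containers:
      - name: run1
        image: busybox:1.29
        command: ["/bin/sleep", "300"]
    EOF
    kubectl get pod init-demo -w    # Init:0/2 -> Init:1/2 -> PodInitializing -> Running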
• [SLOW TEST:11.961 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":83,"skipped":1331,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:33:53.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-8c244811-c4f6-4a57-b715-4809def65058 in namespace container-probe-8950 Jan 20 21:34:03.764: INFO: Started pod liveness-8c244811-c4f6-4a57-b715-4809def65058 in namespace container-probe-8950 STEP: checking the pod's current state and verifying that restartCount is present Jan 20 21:34:03.769: INFO: Initial restart count of pod liveness-8c244811-c4f6-4a57-b715-4809def65058 is 0 Jan 20 21:34:21.920: INFO: Restart count of pod container-probe-8950/liveness-8c244811-c4f6-4a57-b715-4809def65058 is now 1 (18.151575677s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:34:21.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8950" for this suite. 
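[editor's note] The restart at ~18s is the kubelet failing the HTTP liveness probe and recycling the container; agnhost's liveness server is built to start returning errors on /healthz shortly after startup (a property of the e2e image, stated here as background rather than read from this log). The probe shape under test is roughly the following; port 8080 and the delay values are assumptions:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo
    spec:
      containers:
      - name: liveness
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["liveness"]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          timeoutSeconds: 1
    EOF
    kubectl get pod liveness-demo -w    # RESTARTS increments once /healthz starts failing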
• [SLOW TEST:28.445 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1332,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:34:22.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7483.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7483.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7483.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7483.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 20 21:34:36.374: INFO: DNS probes using dns-test-f109d862-d097-42d9-a834-c0d84dc966bc succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7483.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7483.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7483.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7483.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 20 21:34:48.666: INFO: File wheezy_udp@dns-test-service-3.dns-7483.svc.cluster.local from pod dns-7483/dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 20 21:34:48.674: INFO: File jessie_udp@dns-test-service-3.dns-7483.svc.cluster.local from pod dns-7483/dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 20 21:34:48.674: INFO: Lookups using dns-7483/dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 failed for: [wheezy_udp@dns-test-service-3.dns-7483.svc.cluster.local jessie_udp@dns-test-service-3.dns-7483.svc.cluster.local] Jan 20 21:34:53.688: INFO: File wheezy_udp@dns-test-service-3.dns-7483.svc.cluster.local from pod dns-7483/dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 20 21:34:53.700: INFO: File jessie_udp@dns-test-service-3.dns-7483.svc.cluster.local from pod dns-7483/dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 20 21:34:53.700: INFO: Lookups using dns-7483/dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 failed for: [wheezy_udp@dns-test-service-3.dns-7483.svc.cluster.local jessie_udp@dns-test-service-3.dns-7483.svc.cluster.local] Jan 20 21:34:58.687: INFO: File wheezy_udp@dns-test-service-3.dns-7483.svc.cluster.local from pod dns-7483/dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 20 21:34:58.712: INFO: File jessie_udp@dns-test-service-3.dns-7483.svc.cluster.local from pod dns-7483/dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 20 21:34:58.712: INFO: Lookups using dns-7483/dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 failed for: [wheezy_udp@dns-test-service-3.dns-7483.svc.cluster.local jessie_udp@dns-test-service-3.dns-7483.svc.cluster.local] Jan 20 21:35:03.686: INFO: File wheezy_udp@dns-test-service-3.dns-7483.svc.cluster.local from pod dns-7483/dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 20 21:35:03.692: INFO: File jessie_udp@dns-test-service-3.dns-7483.svc.cluster.local from pod dns-7483/dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 20 21:35:03.692: INFO: Lookups using dns-7483/dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 failed for: [wheezy_udp@dns-test-service-3.dns-7483.svc.cluster.local jessie_udp@dns-test-service-3.dns-7483.svc.cluster.local] Jan 20 21:35:08.693: INFO: DNS probes using dns-test-625f7b8f-9291-4b0d-885d-1ff6285b35e8 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7483.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7483.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7483.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7483.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 20 21:35:23.065: INFO: DNS probes using dns-test-27a9657d-4565-44a0-989b-efbfb680a1fc succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:35:23.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7483" for this suite. 
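All three phases above mutate a single Service: cluster DNS serves an ExternalName Service as a CNAME record, so editing spec.externalName repoints the name, and converting the Service to ClusterIP makes the same name resolve to an A record, which is what the dig loops assert. A rough sketch of that sequence (the port in the final patch is an assumption; the test's actual manifests are not shown in the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3            # resolvable as dns-test-service-3.<namespace>.svc.cluster.local
spec:
  type: ExternalName
  externalName: foo.example.com       # published by cluster DNS as a CNAME target
EOF
# phase two: repoint the CNAME
kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'
# phase three: switch to ClusterIP so the name returns an A record instead
kubectl patch service dns-test-service-3 -p '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80,"protocol":"TCP"}]}}'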
• [SLOW TEST:61.285 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":85,"skipped":1340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:35:23.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 20 21:35:23.696: INFO: Waiting up to 5m0s for pod "downward-api-04ab8224-b1b0-44b2-a912-9653aead6e57" in namespace "downward-api-6909" to be "success or failure" Jan 20 21:35:23.705: INFO: Pod "downward-api-04ab8224-b1b0-44b2-a912-9653aead6e57": Phase="Pending", Reason="", readiness=false. Elapsed: 9.373616ms Jan 20 21:35:25.714: INFO: Pod "downward-api-04ab8224-b1b0-44b2-a912-9653aead6e57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018414635s Jan 20 21:35:27.724: INFO: Pod "downward-api-04ab8224-b1b0-44b2-a912-9653aead6e57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027593314s Jan 20 21:35:29.730: INFO: Pod "downward-api-04ab8224-b1b0-44b2-a912-9653aead6e57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03443809s Jan 20 21:35:31.752: INFO: Pod "downward-api-04ab8224-b1b0-44b2-a912-9653aead6e57": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05628074s Jan 20 21:35:33.760: INFO: Pod "downward-api-04ab8224-b1b0-44b2-a912-9653aead6e57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063877433s STEP: Saw pod success Jan 20 21:35:33.760: INFO: Pod "downward-api-04ab8224-b1b0-44b2-a912-9653aead6e57" satisfied condition "success or failure" Jan 20 21:35:33.764: INFO: Trying to get logs from node jerma-node pod downward-api-04ab8224-b1b0-44b2-a912-9653aead6e57 container dapi-container: STEP: delete the pod Jan 20 21:35:33.888: INFO: Waiting for pod downward-api-04ab8224-b1b0-44b2-a912-9653aead6e57 to disappear Jan 20 21:35:33.896: INFO: Pod downward-api-04ab8224-b1b0-44b2-a912-9653aead6e57 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:35:33.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6909" for this suite. 
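The env vars this pod checks come from resourceFieldRef, the downward-API selector that exposes a container's own requests and limits; values are divided by an optional divisor (default 1) and rounded up. A minimal sketch with illustrative names and quantities:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 1250m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu        # divisor defaults to 1, so 1250m rounds up to "2"
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
          divisor: 1Mi                # exposed as "32" rather than raw bytes
EOF
kubectl logs dapi-env-demo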
• [SLOW TEST:10.603 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1368,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:35:33.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Jan 20 21:35:34.146: INFO: Waiting up to 5m0s for pod "var-expansion-ad952948-eb37-44cc-8f38-05828731cee4" in namespace "var-expansion-3634" to be "success or failure" Jan 20 21:35:34.180: INFO: Pod "var-expansion-ad952948-eb37-44cc-8f38-05828731cee4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.345819ms Jan 20 21:35:36.187: INFO: Pod "var-expansion-ad952948-eb37-44cc-8f38-05828731cee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04095693s Jan 20 21:35:38.195: INFO: Pod "var-expansion-ad952948-eb37-44cc-8f38-05828731cee4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049071023s Jan 20 21:35:40.226: INFO: Pod "var-expansion-ad952948-eb37-44cc-8f38-05828731cee4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080573222s Jan 20 21:35:42.262: INFO: Pod "var-expansion-ad952948-eb37-44cc-8f38-05828731cee4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116402163s STEP: Saw pod success Jan 20 21:35:42.262: INFO: Pod "var-expansion-ad952948-eb37-44cc-8f38-05828731cee4" satisfied condition "success or failure" Jan 20 21:35:42.265: INFO: Trying to get logs from node jerma-node pod var-expansion-ad952948-eb37-44cc-8f38-05828731cee4 container dapi-container: STEP: delete the pod Jan 20 21:35:42.383: INFO: Waiting for pod var-expansion-ad952948-eb37-44cc-8f38-05828731cee4 to disappear Jan 20 21:35:42.416: INFO: Pod var-expansion-ad952948-eb37-44cc-8f38-05828731cee4 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:35:42.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3634" for this suite. 
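Args substitution uses the $(VAR) syntax: references to variables defined in the container's env are expanded before the process starts, and unresolvable references are left verbatim. A minimal sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test-message"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)", "$(UNDEFINED)"]   # prints: test-message $(UNDEFINED)
EOF
kubectl logs var-expansion-demo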
• [SLOW TEST:8.516 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1402,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:35:42.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-qckh STEP: Creating a pod to test atomic-volume-subpath Jan 20 21:35:42.587: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qckh" in namespace "subpath-4825" to be "success or failure" Jan 20 21:35:42.601: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Pending", Reason="", readiness=false. Elapsed: 13.656133ms Jan 20 21:35:44.621: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034029162s Jan 20 21:35:46.630: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042305031s Jan 20 21:35:48.638: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Running", Reason="", readiness=true. Elapsed: 6.050348234s Jan 20 21:35:50.646: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Running", Reason="", readiness=true. Elapsed: 8.059082296s Jan 20 21:35:52.659: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Running", Reason="", readiness=true. Elapsed: 10.071653116s Jan 20 21:35:54.665: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Running", Reason="", readiness=true. Elapsed: 12.078097429s Jan 20 21:35:56.676: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Running", Reason="", readiness=true. Elapsed: 14.088649423s Jan 20 21:35:58.704: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Running", Reason="", readiness=true. Elapsed: 16.116638537s Jan 20 21:36:00.718: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Running", Reason="", readiness=true. Elapsed: 18.130710764s Jan 20 21:36:02.767: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Running", Reason="", readiness=true. Elapsed: 20.179292975s Jan 20 21:36:04.794: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Running", Reason="", readiness=true. Elapsed: 22.206485686s Jan 20 21:36:06.802: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.214691303s Jan 20 21:36:08.827: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Running", Reason="", readiness=true. Elapsed: 26.239908018s Jan 20 21:36:10.835: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Running", Reason="", readiness=true. Elapsed: 28.24726768s Jan 20 21:36:12.875: INFO: Pod "pod-subpath-test-downwardapi-qckh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.28793092s STEP: Saw pod success Jan 20 21:36:12.875: INFO: Pod "pod-subpath-test-downwardapi-qckh" satisfied condition "success or failure" Jan 20 21:36:12.880: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-qckh container test-container-subpath-downwardapi-qckh: STEP: delete the pod Jan 20 21:36:12.959: INFO: Waiting for pod pod-subpath-test-downwardapi-qckh to disappear Jan 20 21:36:13.030: INFO: Pod pod-subpath-test-downwardapi-qckh no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-qckh Jan 20 21:36:13.030: INFO: Deleting pod "pod-subpath-test-downwardapi-qckh" in namespace "subpath-4825" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:36:13.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4825" for this suite. • [SLOW TEST:30.615 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":88,"skipped":1418,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:36:13.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-eb2e6b4b-75dd-45d9-9eba-9878b820056d STEP: Creating a pod to test consume configMaps Jan 20 21:36:13.195: INFO: Waiting up to 5m0s for pod "pod-configmaps-d9c0d8d3-2a4c-4d65-b9bf-2ede29a74bc5" in namespace "configmap-6089" to be "success or failure" Jan 20 21:36:13.223: INFO: Pod "pod-configmaps-d9c0d8d3-2a4c-4d65-b9bf-2ede29a74bc5": Phase="Pending", Reason="", readiness=false. Elapsed: 27.665369ms Jan 20 21:36:15.231: INFO: Pod "pod-configmaps-d9c0d8d3-2a4c-4d65-b9bf-2ede29a74bc5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035356636s Jan 20 21:36:17.240: INFO: Pod "pod-configmaps-d9c0d8d3-2a4c-4d65-b9bf-2ede29a74bc5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044389415s Jan 20 21:36:19.249: INFO: Pod "pod-configmaps-d9c0d8d3-2a4c-4d65-b9bf-2ede29a74bc5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053007728s Jan 20 21:36:21.256: INFO: Pod "pod-configmaps-d9c0d8d3-2a4c-4d65-b9bf-2ede29a74bc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060284207s STEP: Saw pod success Jan 20 21:36:21.256: INFO: Pod "pod-configmaps-d9c0d8d3-2a4c-4d65-b9bf-2ede29a74bc5" satisfied condition "success or failure" Jan 20 21:36:21.260: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d9c0d8d3-2a4c-4d65-b9bf-2ede29a74bc5 container configmap-volume-test: STEP: delete the pod Jan 20 21:36:21.305: INFO: Waiting for pod pod-configmaps-d9c0d8d3-2a4c-4d65-b9bf-2ede29a74bc5 to disappear Jan 20 21:36:21.319: INFO: Pod pod-configmaps-d9c0d8d3-2a4c-4d65-b9bf-2ede29a74bc5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:36:21.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6089" for this suite. • [SLOW TEST:8.292 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:36:21.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 21:36:22.092: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 21:36:24.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152982, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152982, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152982, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152982, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:36:26.118: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152982, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152982, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152982, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152982, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:36:28.124: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152982, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152982, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152982, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715152982, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 21:36:31.140: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:36:41.546: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1825" for this suite. STEP: Destroying namespace "webhook-1825-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.486 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":90,"skipped":1496,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:36:41.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:36:41.925: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 20 21:36:45.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7565 create -f -' Jan 20 21:36:48.190: INFO: stderr: "" Jan 20 21:36:48.190: INFO: stdout: "e2e-test-crd-publish-openapi-6017-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 20 21:36:48.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7565 delete e2e-test-crd-publish-openapi-6017-crds test-cr' Jan 20 21:36:48.349: INFO: stderr: "" Jan 20 21:36:48.349: INFO: stdout: "e2e-test-crd-publish-openapi-6017-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jan 20 21:36:48.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7565 apply -f -' Jan 20 21:36:48.714: INFO: stderr: "" Jan 20 21:36:48.714: INFO: stdout: "e2e-test-crd-publish-openapi-6017-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 20 21:36:48.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7565 delete e2e-test-crd-publish-openapi-6017-crds test-cr' Jan 20 21:36:48.896: INFO: stderr: "" Jan 20 21:36:48.896: INFO: stdout: "e2e-test-crd-publish-openapi-6017-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 20 21:36:48.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain 
e2e-test-crd-publish-openapi-6017-crds' Jan 20 21:36:49.310: INFO: stderr: "" Jan 20 21:36:49.310: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6017-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Waldo\n\n status\t<Object>\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:36:51.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7565" for this suite. • [SLOW TEST:9.769 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":91,"skipped":1508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:36:51.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:36:59.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-481" for this suite.
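The /etc/hosts entries that test verifies come from the pod's hostAliases field, which the kubelet appends to the hosts file it manages for the pod. A minimal sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo        # output should include: 127.0.0.1  foo.local  bar.local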
• [SLOW TEST:8.284 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1537,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:36:59.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:38:00.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4860" for this suite. 
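The minute-long observation window above is the whole point: a failing readiness probe only gates traffic, so the pod is kept out of Service endpoints and READY stays 0/1, but, unlike a liveness failure, the container is never restarted. A sketch of such a pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]      # always fails: never Ready, never restarted
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod never-ready          # expect READY 0/1, RESTARTS 0, STATUS Running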
• [SLOW TEST:60.168 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1548,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:38:00.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5360 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 20 21:38:00.176: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 20 21:38:34.406: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5360 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 21:38:34.406: INFO: >>> kubeConfig: /root/.kube/config I0120 21:38:34.477550 9 log.go:172] (0xc004c8e6e0) (0xc001c18dc0) Create stream I0120 21:38:34.477775 9 log.go:172] (0xc004c8e6e0) (0xc001c18dc0) Stream added, broadcasting: 1 I0120 21:38:34.486434 9 log.go:172] (0xc004c8e6e0) Reply frame received for 1 I0120 21:38:34.486654 9 log.go:172] (0xc004c8e6e0) (0xc001c97f40) Create stream I0120 21:38:34.486685 9 log.go:172] (0xc004c8e6e0) (0xc001c97f40) Stream added, broadcasting: 3 I0120 21:38:34.488756 9 log.go:172] (0xc004c8e6e0) Reply frame received for 3 I0120 21:38:34.488835 9 log.go:172] (0xc004c8e6e0) (0xc0024fc0a0) Create stream I0120 21:38:34.488855 9 log.go:172] (0xc004c8e6e0) (0xc0024fc0a0) Stream added, broadcasting: 5 I0120 21:38:34.491406 9 log.go:172] (0xc004c8e6e0) Reply frame received for 5 I0120 21:38:34.613107 9 log.go:172] (0xc004c8e6e0) Data frame received for 3 I0120 21:38:34.613223 9 log.go:172] (0xc001c97f40) (3) Data frame handling I0120 21:38:34.613253 9 log.go:172] (0xc001c97f40) (3) Data frame sent I0120 21:38:34.695453 9 log.go:172] (0xc004c8e6e0) Data frame received for 1 I0120 21:38:34.695621 9 log.go:172] (0xc004c8e6e0) (0xc0024fc0a0) Stream removed, broadcasting: 5 I0120 21:38:34.695860 9 log.go:172] (0xc004c8e6e0) (0xc001c97f40) Stream removed, broadcasting: 3 I0120 21:38:34.696002 9 log.go:172] (0xc001c18dc0) (1) Data frame handling I0120 21:38:34.696055 9 log.go:172] (0xc001c18dc0) (1) Data frame sent I0120 21:38:34.696072 9 log.go:172] (0xc004c8e6e0) (0xc001c18dc0) Stream removed, broadcasting: 1 I0120 
21:38:34.696106 9 log.go:172] (0xc004c8e6e0) Go away received I0120 21:38:34.696510 9 log.go:172] (0xc004c8e6e0) (0xc001c18dc0) Stream removed, broadcasting: 1 I0120 21:38:34.696533 9 log.go:172] (0xc004c8e6e0) (0xc001c97f40) Stream removed, broadcasting: 3 I0120 21:38:34.696556 9 log.go:172] (0xc004c8e6e0) (0xc0024fc0a0) Stream removed, broadcasting: 5 Jan 20 21:38:34.696: INFO: Found all expected endpoints: [netserver-0] Jan 20 21:38:34.702: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5360 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 21:38:34.702: INFO: >>> kubeConfig: /root/.kube/config I0120 21:38:34.750233 9 log.go:172] (0xc0050622c0) (0xc00147c500) Create stream I0120 21:38:34.750528 9 log.go:172] (0xc0050622c0) (0xc00147c500) Stream added, broadcasting: 1 I0120 21:38:34.761409 9 log.go:172] (0xc0050622c0) Reply frame received for 1 I0120 21:38:34.761613 9 log.go:172] (0xc0050622c0) (0xc001d96a00) Create stream I0120 21:38:34.761641 9 log.go:172] (0xc0050622c0) (0xc001d96a00) Stream added, broadcasting: 3 I0120 21:38:34.763447 9 log.go:172] (0xc0050622c0) Reply frame received for 3 I0120 21:38:34.763489 9 log.go:172] (0xc0050622c0) (0xc0024fc500) Create stream I0120 21:38:34.763501 9 log.go:172] (0xc0050622c0) (0xc0024fc500) Stream added, broadcasting: 5 I0120 21:38:34.765137 9 log.go:172] (0xc0050622c0) Reply frame received for 5 I0120 21:38:34.867386 9 log.go:172] (0xc0050622c0) Data frame received for 3 I0120 21:38:34.867470 9 log.go:172] (0xc001d96a00) (3) Data frame handling I0120 21:38:34.867512 9 log.go:172] (0xc001d96a00) (3) Data frame sent I0120 21:38:34.954988 9 log.go:172] (0xc0050622c0) Data frame received for 1 I0120 21:38:34.955250 9 log.go:172] (0xc0050622c0) (0xc001d96a00) Stream removed, broadcasting: 3 I0120 21:38:34.955347 9 log.go:172] (0xc00147c500) (1) Data frame handling I0120 21:38:34.955401 9 log.go:172] (0xc00147c500) (1) Data frame sent I0120 21:38:34.955465 9 log.go:172] (0xc0050622c0) (0xc0024fc500) Stream removed, broadcasting: 5 I0120 21:38:34.955557 9 log.go:172] (0xc0050622c0) (0xc00147c500) Stream removed, broadcasting: 1 I0120 21:38:34.955718 9 log.go:172] (0xc0050622c0) Go away received I0120 21:38:34.956369 9 log.go:172] (0xc0050622c0) (0xc00147c500) Stream removed, broadcasting: 1 I0120 21:38:34.956409 9 log.go:172] (0xc0050622c0) (0xc001d96a00) Stream removed, broadcasting: 3 I0120 21:38:34.956426 9 log.go:172] (0xc0050622c0) (0xc0024fc500) Stream removed, broadcasting: 5 Jan 20 21:38:34.956: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:38:34.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5360" for this suite. 
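The stream frames above are the framework exec'ing into the host-network helper pod and curling each netserver pod's /hostName endpoint (agnhost's HTTP server replies with its hostname); getting an answer from both pod IPs proves node-to-pod HTTP connectivity on both nodes. The manual equivalent is the same command the log shows:

kubectl exec -n pod-network-test-5360 host-test-container-pod -- \
  sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName"
# prints the serving pod's hostname, e.g. netserver-0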
• [SLOW TEST:34.924 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1548,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:38:34.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jan 20 21:38:35.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8307' Jan 20 21:38:35.545: INFO: stderr: "" Jan 20 21:38:35.545: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 20 21:38:36.562: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:36.562: INFO: Found 0 / 1 Jan 20 21:38:38.570: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:38.570: INFO: Found 0 / 1 Jan 20 21:38:39.554: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:39.554: INFO: Found 0 / 1 Jan 20 21:38:40.557: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:40.557: INFO: Found 0 / 1 Jan 20 21:38:42.020: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:42.021: INFO: Found 0 / 1 Jan 20 21:38:42.556: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:42.556: INFO: Found 0 / 1 Jan 20 21:38:44.077: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:44.077: INFO: Found 0 / 1 Jan 20 21:38:44.734: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:44.734: INFO: Found 0 / 1 Jan 20 21:38:45.557: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:45.557: INFO: Found 0 / 1 Jan 20 21:38:46.561: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:46.561: INFO: Found 0 / 1 Jan 20 21:38:47.576: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:47.576: INFO: Found 1 / 1 Jan 20 21:38:47.576: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 20 21:38:47.580: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:47.580: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
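The patch command on the next line is a strategic merge patch that adds a single annotation; in generic form (the pod name and namespace placeholders are hypothetical):

kubectl patch pod <pod-name> -n <namespace> -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.annotations.x}'   # prints: y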
Jan 20 21:38:47.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-v97r8 --namespace=kubectl-8307 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 20 21:38:47.719: INFO: stderr: "" Jan 20 21:38:47.719: INFO: stdout: "pod/agnhost-master-v97r8 patched\n" STEP: checking annotations Jan 20 21:38:47.724: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:38:47.724: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:38:47.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8307" for this suite. • [SLOW TEST:12.759 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":95,"skipped":1549,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:38:47.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 21:38:47.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2cfee40-7804-4cb4-a66b-8d51587fc8c2" in namespace "downward-api-6650" to be "success or failure" Jan 20 21:38:47.886: INFO: Pod "downwardapi-volume-d2cfee40-7804-4cb4-a66b-8d51587fc8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.204346ms Jan 20 21:38:49.895: INFO: Pod "downwardapi-volume-d2cfee40-7804-4cb4-a66b-8d51587fc8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039071854s Jan 20 21:38:51.938: INFO: Pod "downwardapi-volume-d2cfee40-7804-4cb4-a66b-8d51587fc8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082104297s Jan 20 21:38:53.954: INFO: Pod "downwardapi-volume-d2cfee40-7804-4cb4-a66b-8d51587fc8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097529391s Jan 20 21:38:55.961: INFO: Pod "downwardapi-volume-d2cfee40-7804-4cb4-a66b-8d51587fc8c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.104745178s STEP: Saw pod success Jan 20 21:38:55.961: INFO: Pod "downwardapi-volume-d2cfee40-7804-4cb4-a66b-8d51587fc8c2" satisfied condition "success or failure" Jan 20 21:38:55.966: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d2cfee40-7804-4cb4-a66b-8d51587fc8c2 container client-container: STEP: delete the pod Jan 20 21:38:56.090: INFO: Waiting for pod downwardapi-volume-d2cfee40-7804-4cb4-a66b-8d51587fc8c2 to disappear Jan 20 21:38:56.106: INFO: Pod downwardapi-volume-d2cfee40-7804-4cb4-a66b-8d51587fc8c2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:38:56.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6650" for this suite. • [SLOW TEST:8.389 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1554,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:38:56.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:38:56.211: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jan 20 21:38:58.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9575 create -f -' Jan 20 21:39:01.161: INFO: stderr: "" Jan 20 21:39:01.161: INFO: stdout: "e2e-test-crd-publish-openapi-7498-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 20 21:39:01.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9575 delete e2e-test-crd-publish-openapi-7498-crds test-foo' Jan 20 21:39:01.376: INFO: stderr: "" Jan 20 21:39:01.377: INFO: stdout: "e2e-test-crd-publish-openapi-7498-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 20 21:39:01.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9575 apply -f -' Jan 20 21:39:01.859: INFO: stderr: "" Jan 20 21:39:01.859: INFO: stdout: "e2e-test-crd-publish-openapi-7498-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 20 21:39:01.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9575 delete 
e2e-test-crd-publish-openapi-7498-crds test-foo' Jan 20 21:39:02.015: INFO: stderr: "" Jan 20 21:39:02.016: INFO: stdout: "e2e-test-crd-publish-openapi-7498-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 20 21:39:02.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9575 create -f -' Jan 20 21:39:02.552: INFO: rc: 1 Jan 20 21:39:02.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9575 apply -f -' Jan 20 21:39:02.903: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jan 20 21:39:02.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9575 create -f -' Jan 20 21:39:03.309: INFO: rc: 1 Jan 20 21:39:03.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9575 apply -f -' Jan 20 21:39:03.730: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jan 20 21:39:03.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7498-crds' Jan 20 21:39:04.206: INFO: stderr: "" Jan 20 21:39:04.207: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7498-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jan 20 21:39:04.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7498-crds.metadata' Jan 20 21:39:04.653: INFO: stderr: "" Jan 20 21:39:04.654: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7498-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. 
This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED: Kubernetes will stop propagating this field in the 1.20\n release, and the field is planned to be removed in the 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 20 21:39:04.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7498-crds.spec' Jan 20 21:39:04.947: INFO: stderr: "" Jan 20 21:39:04.947: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7498-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 20 21:39:04.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7498-crds.spec.bars' Jan 20 21:39:05.404: INFO: stderr: "" Jan 20 21:39:05.405: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7498-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 20 21:39:05.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7498-crds.spec.bars2' Jan 20 21:39:05.891: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:39:08.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9575" for this suite. 
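The explain calls above show what CRD schema publishing buys you: once the OpenAPI schema is published, kubectl explain can drill into custom resource fields exactly as it does for built-in types, and a field missing from the schema fails with a non-zero exit code, which is what the rc: 1 above asserts. A minimal sketch of the same checks, using a hypothetical CRD plural name mycrds (the e2e-generated names in this run are per-run and will not exist elsewhere):

    # Drill into the published schema of a custom resource, field by field.
    kubectl explain mycrds.spec
    kubectl explain mycrds.spec.bars
    # Asking for a field that is not in the schema should fail with rc=1.
    kubectl explain mycrds.spec.bars2 || echo "explain failed as expected: rc=$?"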
• [SLOW TEST:12.102 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":97,"skipped":1568,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:39:08.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:39:08.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7565' Jan 20 21:39:08.840: INFO: stderr: "" Jan 20 21:39:08.840: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jan 20 21:39:08.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7565' Jan 20 21:39:09.343: INFO: stderr: "" Jan 20 21:39:09.344: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 20 21:39:10.368: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:39:10.369: INFO: Found 0 / 1 Jan 20 21:39:11.354: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:39:11.354: INFO: Found 0 / 1 Jan 20 21:39:12.354: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:39:12.354: INFO: Found 0 / 1 Jan 20 21:39:13.357: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:39:13.358: INFO: Found 0 / 1 Jan 20 21:39:14.352: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:39:14.352: INFO: Found 0 / 1 Jan 20 21:39:15.372: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:39:15.372: INFO: Found 1 / 1 Jan 20 21:39:15.372: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 20 21:39:15.380: INFO: Selector matched 1 pods for map[app:agnhost] Jan 20 21:39:15.380: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 20 21:39:15.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-47z62 --namespace=kubectl-7565' Jan 20 21:39:15.603: INFO: stderr: "" Jan 20 21:39:15.603: INFO: stdout: "Name: agnhost-master-47z62\nNamespace: kubectl-7565\nPriority: 0\nNode: jerma-node/10.96.2.250\nStart Time: Mon, 20 Jan 2020 21:39:08 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.44.0.1\nIPs:\n IP: 10.44.0.1\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: docker://a3458188f359631f0fb3d4d27e9ddbdbc6d8d482aa38a20cd88eefc7a3ef5716\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 20 Jan 2020 21:39:14 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-bmwn7 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-bmwn7:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-bmwn7\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled <unknown> default-scheduler Successfully assigned kubectl-7565/agnhost-master-47z62 to jerma-node\n Normal Pulled 3s kubelet, jerma-node Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-node Created container agnhost-master\n Normal Started 1s kubelet, jerma-node Started container agnhost-master\n" Jan 20 21:39:15.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7565' Jan 20 21:39:15.824: INFO: stderr: "" Jan 20 21:39:15.824: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7565\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-master-47z62\n" Jan 20 21:39:15.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7565' Jan 20 21:39:15.992: INFO: stderr: "" Jan 20 21:39:15.992: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7565\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.105.176\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: <none>\n" Jan 20 21:39:15.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node' Jan 20 21:39:16.159: INFO: stderr: "" Jan 20 21:39:16.159: INFO: stdout: "Name: jerma-node\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n 
kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 04 Jan 2020 11:59:52 +0000\nTaints: <none>\nUnschedulable: false\nLease:\n HolderIdentity: jerma-node\n AcquireTime: <unset>\n RenewTime: Mon, 20 Jan 2020 21:39:14 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 04 Jan 2020 12:00:49 +0000 Sat, 04 Jan 2020 12:00:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Mon, 20 Jan 2020 21:37:08 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 20 Jan 2020 21:37:08 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 20 Jan 2020 21:37:08 +0000 Sat, 04 Jan 2020 11:59:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 20 Jan 2020 21:37:08 +0000 Sat, 04 Jan 2020 12:00:52 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.2.250\n Hostname: jerma-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: bdc16344252549dd902c3a5d68b22f41\n System UUID: BDC16344-2525-49DD-902C-3A5D68B22F41\n Boot ID: eec61fc4-8bf6-487f-8f93-ea9731fe757a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-dsf66 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system weave-net-kz8lv 20m (0%) 0 (0%) 0 (0%) 0 (0%) 16d\n kubectl-7565 agnhost-master-47z62 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Jan 20 21:39:16.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7565' Jan 20 21:39:16.296: INFO: stderr: "" Jan 20 21:39:16.296: INFO: stdout: "Name: kubectl-7565\nLabels: e2e-framework=kubectl\n e2e-run=8abc3cf8-1405-42c2-ac6c-f390de6c22b1\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:39:16.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7565" for this suite. 
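The describe test walks one resource of each kind. The same sequence can be replayed by hand against this run's namespace (the names are taken from the log above, so they only exist while the suite is running):

    kubectl describe pod agnhost-master-47z62 --namespace=kubectl-7565
    kubectl describe rc agnhost-master --namespace=kubectl-7565
    kubectl describe service agnhost-master --namespace=kubectl-7565
    kubectl describe node jerma-node
    kubectl describe namespace kubectl-7565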
• [SLOW TEST:8.076 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":98,"skipped":1569,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:39:16.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 21:39:16.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6c9b946-05cd-49dd-9aae-2fde1ced14fe" in namespace "projected-5423" to be "success or failure" Jan 20 21:39:16.455: INFO: Pod "downwardapi-volume-a6c9b946-05cd-49dd-9aae-2fde1ced14fe": Phase="Pending", Reason="", readiness=false. Elapsed: 57.916158ms Jan 20 21:39:18.467: INFO: Pod "downwardapi-volume-a6c9b946-05cd-49dd-9aae-2fde1ced14fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07056595s Jan 20 21:39:20.480: INFO: Pod "downwardapi-volume-a6c9b946-05cd-49dd-9aae-2fde1ced14fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08331132s Jan 20 21:39:22.492: INFO: Pod "downwardapi-volume-a6c9b946-05cd-49dd-9aae-2fde1ced14fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095446163s Jan 20 21:39:24.512: INFO: Pod "downwardapi-volume-a6c9b946-05cd-49dd-9aae-2fde1ced14fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11516657s Jan 20 21:39:26.524: INFO: Pod "downwardapi-volume-a6c9b946-05cd-49dd-9aae-2fde1ced14fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.12725521s STEP: Saw pod success Jan 20 21:39:26.524: INFO: Pod "downwardapi-volume-a6c9b946-05cd-49dd-9aae-2fde1ced14fe" satisfied condition "success or failure" Jan 20 21:39:26.531: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a6c9b946-05cd-49dd-9aae-2fde1ced14fe container client-container: STEP: delete the pod Jan 20 21:39:26.593: INFO: Waiting for pod downwardapi-volume-a6c9b946-05cd-49dd-9aae-2fde1ced14fe to disappear Jan 20 21:39:26.605: INFO: Pod downwardapi-volume-a6c9b946-05cd-49dd-9aae-2fde1ced14fe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:39:26.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5423" for this suite. • [SLOW TEST:10.319 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1577,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:39:26.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Jan 20 21:39:26.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6394' Jan 20 21:39:27.195: INFO: stderr: "" Jan 20 21:39:27.195: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 20 21:39:27.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6394' Jan 20 21:39:27.395: INFO: stderr: "" Jan 20 21:39:27.395: INFO: stdout: "update-demo-nautilus-dvhgz update-demo-nautilus-kbk77 " Jan 20 21:39:27.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dvhgz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6394' Jan 20 21:39:27.531: INFO: stderr: "" Jan 20 21:39:27.531: INFO: stdout: "" Jan 20 21:39:27.531: INFO: update-demo-nautilus-dvhgz is created but not running Jan 20 21:39:32.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6394' Jan 20 21:39:33.135: INFO: stderr: "" Jan 20 21:39:33.136: INFO: stdout: "update-demo-nautilus-dvhgz update-demo-nautilus-kbk77 " Jan 20 21:39:33.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dvhgz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6394' Jan 20 21:39:33.502: INFO: stderr: "" Jan 20 21:39:33.503: INFO: stdout: "" Jan 20 21:39:33.503: INFO: update-demo-nautilus-dvhgz is created but not running Jan 20 21:39:38.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6394' Jan 20 21:39:38.723: INFO: stderr: "" Jan 20 21:39:38.724: INFO: stdout: "update-demo-nautilus-dvhgz update-demo-nautilus-kbk77 " Jan 20 21:39:38.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dvhgz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6394' Jan 20 21:39:38.842: INFO: stderr: "" Jan 20 21:39:38.842: INFO: stdout: "true" Jan 20 21:39:38.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dvhgz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6394' Jan 20 21:39:39.000: INFO: stderr: "" Jan 20 21:39:39.000: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 20 21:39:39.000: INFO: validating pod update-demo-nautilus-dvhgz Jan 20 21:39:39.006: INFO: got data: { "image": "nautilus.jpg" } Jan 20 21:39:39.006: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 20 21:39:39.006: INFO: update-demo-nautilus-dvhgz is verified up and running Jan 20 21:39:39.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbk77 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6394' Jan 20 21:39:39.126: INFO: stderr: "" Jan 20 21:39:39.126: INFO: stdout: "true" Jan 20 21:39:39.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbk77 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6394' Jan 20 21:39:39.242: INFO: stderr: "" Jan 20 21:39:39.242: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 20 21:39:39.242: INFO: validating pod update-demo-nautilus-kbk77 Jan 20 21:39:39.251: INFO: got data: { "image": "nautilus.jpg" } Jan 20 21:39:39.251: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 20 21:39:39.251: INFO: update-demo-nautilus-kbk77 is verified up and running STEP: rolling-update to new replication controller Jan 20 21:39:39.256: INFO: scanned /root for discovery docs: Jan 20 21:39:39.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6394' Jan 20 21:40:07.621: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 20 21:40:07.621: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 20 21:40:07.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6394' Jan 20 21:40:07.764: INFO: stderr: "" Jan 20 21:40:07.764: INFO: stdout: "update-demo-kitten-b8rwl update-demo-kitten-xj57l " Jan 20 21:40:07.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b8rwl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6394' Jan 20 21:40:07.887: INFO: stderr: "" Jan 20 21:40:07.887: INFO: stdout: "true" Jan 20 21:40:07.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b8rwl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6394' Jan 20 21:40:08.032: INFO: stderr: "" Jan 20 21:40:08.032: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 20 21:40:08.032: INFO: validating pod update-demo-kitten-b8rwl Jan 20 21:40:08.039: INFO: got data: { "image": "kitten.jpg" } Jan 20 21:40:08.039: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 20 21:40:08.039: INFO: update-demo-kitten-b8rwl is verified up and running Jan 20 21:40:08.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xj57l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6394' Jan 20 21:40:08.149: INFO: stderr: "" Jan 20 21:40:08.149: INFO: stdout: "true" Jan 20 21:40:08.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xj57l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6394' Jan 20 21:40:08.274: INFO: stderr: "" Jan 20 21:40:08.274: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 20 21:40:08.275: INFO: validating pod update-demo-kitten-xj57l Jan 20 21:40:08.281: INFO: got data: { "image": "kitten.jpg" } Jan 20 21:40:08.281: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 20 21:40:08.281: INFO: update-demo-kitten-xj57l is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:40:08.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6394" for this suite. • [SLOW TEST:41.663 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":100,"skipped":1616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:40:08.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:40:08.412: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-d040f78f-8df1-4f51-98c6-114dc6f25720" in namespace "security-context-test-3269" to be "success or failure" Jan 20 21:40:08.432: INFO: Pod "alpine-nnp-false-d040f78f-8df1-4f51-98c6-114dc6f25720": Phase="Pending", Reason="", readiness=false. Elapsed: 19.251093ms Jan 20 21:40:10.448: INFO: Pod "alpine-nnp-false-d040f78f-8df1-4f51-98c6-114dc6f25720": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035254526s Jan 20 21:40:12.455: INFO: Pod "alpine-nnp-false-d040f78f-8df1-4f51-98c6-114dc6f25720": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.042740199s Jan 20 21:40:15.067: INFO: Pod "alpine-nnp-false-d040f78f-8df1-4f51-98c6-114dc6f25720": Phase="Pending", Reason="", readiness=false. Elapsed: 6.654151598s Jan 20 21:40:17.143: INFO: Pod "alpine-nnp-false-d040f78f-8df1-4f51-98c6-114dc6f25720": Phase="Pending", Reason="", readiness=false. Elapsed: 8.730294951s Jan 20 21:40:19.154: INFO: Pod "alpine-nnp-false-d040f78f-8df1-4f51-98c6-114dc6f25720": Phase="Pending", Reason="", readiness=false. Elapsed: 10.741027316s Jan 20 21:40:21.163: INFO: Pod "alpine-nnp-false-d040f78f-8df1-4f51-98c6-114dc6f25720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.750762682s Jan 20 21:40:21.163: INFO: Pod "alpine-nnp-false-d040f78f-8df1-4f51-98c6-114dc6f25720" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:40:21.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3269" for this suite. • [SLOW TEST:12.907 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1657,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:40:21.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 20 21:40:28.519: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:40:28.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6094" for this suite. 
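The assertion above ("Expected: &{} to match Container's Termination Message") relies on FallbackToLogsOnError only copying logs into the termination message when the container fails: on a clean exit with no termination message file, the message stays empty. A minimal sketch of the same behavior, with a hypothetical pod name and image not taken from this run:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-demo            # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo some log line; exit 0"]
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    # After the pod succeeds, the termination message should be empty:
    kubectl get pod termination-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'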
• [SLOW TEST:7.501 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1674,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:40:28.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 20 21:40:28.874: INFO: Waiting up to 5m0s for pod "pod-1f626a27-4667-4976-9f23-abcc577380ec" in namespace "emptydir-3731" to be "success or failure" Jan 20 21:40:28.882: INFO: Pod "pod-1f626a27-4667-4976-9f23-abcc577380ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.027366ms Jan 20 21:40:30.899: INFO: Pod "pod-1f626a27-4667-4976-9f23-abcc577380ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025444812s Jan 20 21:40:32.909: INFO: Pod "pod-1f626a27-4667-4976-9f23-abcc577380ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03519488s Jan 20 21:40:34.922: INFO: Pod "pod-1f626a27-4667-4976-9f23-abcc577380ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047948839s Jan 20 21:40:36.932: INFO: Pod "pod-1f626a27-4667-4976-9f23-abcc577380ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058467192s STEP: Saw pod success Jan 20 21:40:36.933: INFO: Pod "pod-1f626a27-4667-4976-9f23-abcc577380ec" satisfied condition "success or failure" Jan 20 21:40:36.939: INFO: Trying to get logs from node jerma-node pod pod-1f626a27-4667-4976-9f23-abcc577380ec container test-container: STEP: delete the pod Jan 20 21:40:37.073: INFO: Waiting for pod pod-1f626a27-4667-4976-9f23-abcc577380ec to disappear Jan 20 21:40:37.081: INFO: Pod pod-1f626a27-4667-4976-9f23-abcc577380ec no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:40:37.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3731" for this suite. 
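The (non-root,0777,tmpfs) case boils down to: a memory-backed emptyDir, mounted into a container running as a non-root UID, must be writable with the expected permissions. A rough sketch under those assumptions; the name, UID, and write-then-stat check are illustrative, since the suite uses its own test image for the permission checks:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo          # hypothetical name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                  # non-root, as in the test case
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "echo hi > /mnt/test/f && stat -c '%a' /mnt/test"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/test
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory                 # tmpfs-backed, per the test name
    EOF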
• [SLOW TEST:8.390 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1681,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:40:37.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-2vfb STEP: Creating a pod to test atomic-volume-subpath Jan 20 21:40:37.250: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2vfb" in namespace "subpath-7419" to be "success or failure" Jan 20 21:40:37.269: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.445429ms Jan 20 21:40:39.280: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03012648s Jan 20 21:40:41.290: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039836908s Jan 20 21:40:43.296: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04623533s Jan 20 21:40:45.304: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 8.05417472s Jan 20 21:40:47.316: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 10.066741802s Jan 20 21:40:49.326: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 12.076128877s Jan 20 21:40:51.337: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 14.086957158s Jan 20 21:40:53.345: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 16.09533039s Jan 20 21:40:55.354: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 18.104028361s Jan 20 21:40:57.364: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 20.114241086s Jan 20 21:40:59.372: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 22.122697705s Jan 20 21:41:01.379: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. Elapsed: 24.129713609s Jan 20 21:41:03.388: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.138473825s Jan 20 21:41:05.400: INFO: Pod "pod-subpath-test-secret-2vfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.149819244s STEP: Saw pod success Jan 20 21:41:05.400: INFO: Pod "pod-subpath-test-secret-2vfb" satisfied condition "success or failure" Jan 20 21:41:05.406: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-2vfb container test-container-subpath-secret-2vfb: STEP: delete the pod Jan 20 21:41:05.480: INFO: Waiting for pod pod-subpath-test-secret-2vfb to disappear Jan 20 21:41:05.486: INFO: Pod pod-subpath-test-secret-2vfb no longer exists STEP: Deleting pod pod-subpath-test-secret-2vfb Jan 20 21:41:05.487: INFO: Deleting pod "pod-subpath-test-secret-2vfb" in namespace "subpath-7419" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:41:05.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7419" for this suite. • [SLOW TEST:28.405 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":104,"skipped":1721,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:41:05.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-pkxv STEP: Creating a pod to test atomic-volume-subpath Jan 20 21:41:05.633: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pkxv" in namespace "subpath-1575" to be "success or failure" Jan 20 21:41:05.729: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Pending", Reason="", readiness=false. Elapsed: 96.150883ms Jan 20 21:41:07.747: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114407409s Jan 20 21:41:09.761: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12774672s Jan 20 21:41:11.772: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139034939s Jan 20 21:41:13.784: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.150756332s Jan 20 21:41:15.793: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Running", Reason="", readiness=true. Elapsed: 10.159939832s Jan 20 21:41:17.802: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Running", Reason="", readiness=true. Elapsed: 12.168654438s Jan 20 21:41:19.814: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Running", Reason="", readiness=true. Elapsed: 14.181321889s Jan 20 21:41:21.829: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Running", Reason="", readiness=true. Elapsed: 16.196234597s Jan 20 21:41:23.838: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Running", Reason="", readiness=true. Elapsed: 18.205420541s Jan 20 21:41:25.851: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Running", Reason="", readiness=true. Elapsed: 20.21784825s Jan 20 21:41:27.865: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Running", Reason="", readiness=true. Elapsed: 22.232105962s Jan 20 21:41:29.874: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Running", Reason="", readiness=true. Elapsed: 24.241121827s Jan 20 21:41:31.888: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Running", Reason="", readiness=true. Elapsed: 26.255437862s Jan 20 21:41:33.948: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Running", Reason="", readiness=true. Elapsed: 28.314577596s Jan 20 21:41:35.958: INFO: Pod "pod-subpath-test-projected-pkxv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.325174799s STEP: Saw pod success Jan 20 21:41:35.958: INFO: Pod "pod-subpath-test-projected-pkxv" satisfied condition "success or failure" Jan 20 21:41:35.975: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-pkxv container test-container-subpath-projected-pkxv: STEP: delete the pod Jan 20 21:41:36.024: INFO: Waiting for pod pod-subpath-test-projected-pkxv to disappear Jan 20 21:41:36.118: INFO: Pod pod-subpath-test-projected-pkxv no longer exists STEP: Deleting pod pod-subpath-test-projected-pkxv Jan 20 21:41:36.119: INFO: Deleting pod "pod-subpath-test-projected-pkxv" in namespace "subpath-1575" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:41:36.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1575" for this suite. 
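Both subpath tests exercise the same mechanism: volumeMounts[].subPath mounts a single entry of an atomically written volume (secret or projected) as a file. A hedged sketch using a projected configMap; all names and the key "data" are illustrative:

    kubectl create configmap my-config --from-literal=data=hello
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo                 # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "cat /mnt/data"]
        volumeMounts:
        - name: proj
          mountPath: /mnt/data
          subPath: data                  # mount only this key as a file
      volumes:
      - name: proj
        projected:
          sources:
          - configMap:
              name: my-config
    EOF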
• [SLOW TEST:30.683 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":105,"skipped":1724,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:41:36.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:41:36.715: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b600a210-c8ad-4901-a29a-4106e99ffd0f", Controller:(*bool)(0xc002da334a), BlockOwnerDeletion:(*bool)(0xc002da334b)}} Jan 20 21:41:36.725: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"87623889-4c5f-421d-9260-337db8301af0", Controller:(*bool)(0xc002d4e082), BlockOwnerDeletion:(*bool)(0xc002d4e083)}} Jan 20 21:41:36.743: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"54a3ec5d-81a7-4405-bd10-ca4079ba96da", Controller:(*bool)(0xc002da34fa), BlockOwnerDeletion:(*bool)(0xc002da34fb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:41:41.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4269" for this suite. 
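The three OwnerReferences dumps above form a cycle (pod1 -> pod3 -> pod2 -> pod1), and the test's point is that the garbage collector still makes progress instead of deadlocking on the cycle. For reference, this is how such an owner reference is expressed on an object; the UID is copied from the log above and must match the live owner, so the manifest is illustrative rather than reusable:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod2
      ownerReferences:
      - apiVersion: v1
        kind: Pod
        name: pod1
        uid: 87623889-4c5f-421d-9260-337db8301af0   # must be the owner's live UID
        controller: true
        blockOwnerDeletion: true
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
    EOF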
• [SLOW TEST:5.705 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":106,"skipped":1727,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:41:41.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 21:41:42.694: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 21:41:44.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:41:46.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:41:48.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:41:50.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153302, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 21:41:53.801: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:41:53.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:41:54.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3628" for this suite. STEP: Destroying namespace "webhook-3628-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.994 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":107,"skipped":1742,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:41:54.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 21:41:55.072: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c19d7745-2131-409c-85f6-1a5dfbef62e0" in namespace "downward-api-3996" to be "success or failure" Jan 20 21:41:55.096: INFO: Pod "downwardapi-volume-c19d7745-2131-409c-85f6-1a5dfbef62e0": Phase="Pending", Reason="", readiness=false. Elapsed: 23.515319ms Jan 20 21:41:57.105: INFO: Pod "downwardapi-volume-c19d7745-2131-409c-85f6-1a5dfbef62e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032464592s Jan 20 21:41:59.113: INFO: Pod "downwardapi-volume-c19d7745-2131-409c-85f6-1a5dfbef62e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040383916s Jan 20 21:42:01.132: INFO: Pod "downwardapi-volume-c19d7745-2131-409c-85f6-1a5dfbef62e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059822984s Jan 20 21:42:03.146: INFO: Pod "downwardapi-volume-c19d7745-2131-409c-85f6-1a5dfbef62e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073913807s Jan 20 21:42:05.157: INFO: Pod "downwardapi-volume-c19d7745-2131-409c-85f6-1a5dfbef62e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.084959423s STEP: Saw pod success Jan 20 21:42:05.158: INFO: Pod "downwardapi-volume-c19d7745-2131-409c-85f6-1a5dfbef62e0" satisfied condition "success or failure" Jan 20 21:42:05.161: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c19d7745-2131-409c-85f6-1a5dfbef62e0 container client-container: STEP: delete the pod Jan 20 21:42:05.212: INFO: Waiting for pod downwardapi-volume-c19d7745-2131-409c-85f6-1a5dfbef62e0 to disappear Jan 20 21:42:05.221: INFO: Pod downwardapi-volume-c19d7745-2131-409c-85f6-1a5dfbef62e0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:42:05.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3996" for this suite. • [SLOW TEST:10.343 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1800,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:42:05.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Jan 20 21:42:05.416: INFO: Waiting up to 5m0s for pod "var-expansion-7f5d12f4-6f64-4abd-92cd-01e620c24fd9" in namespace "var-expansion-3041" to be "success or failure" Jan 20 21:42:05.452: INFO: Pod "var-expansion-7f5d12f4-6f64-4abd-92cd-01e620c24fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 35.560086ms Jan 20 21:42:07.468: INFO: Pod "var-expansion-7f5d12f4-6f64-4abd-92cd-01e620c24fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051712125s Jan 20 21:42:09.490: INFO: Pod "var-expansion-7f5d12f4-6f64-4abd-92cd-01e620c24fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07389761s Jan 20 21:42:11.525: INFO: Pod "var-expansion-7f5d12f4-6f64-4abd-92cd-01e620c24fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108316354s Jan 20 21:42:13.551: INFO: Pod "var-expansion-7f5d12f4-6f64-4abd-92cd-01e620c24fd9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.134429811s STEP: Saw pod success Jan 20 21:42:13.551: INFO: Pod "var-expansion-7f5d12f4-6f64-4abd-92cd-01e620c24fd9" satisfied condition "success or failure" Jan 20 21:42:13.555: INFO: Trying to get logs from node jerma-node pod var-expansion-7f5d12f4-6f64-4abd-92cd-01e620c24fd9 container dapi-container: STEP: delete the pod Jan 20 21:42:13.679: INFO: Waiting for pod var-expansion-7f5d12f4-6f64-4abd-92cd-01e620c24fd9 to disappear Jan 20 21:42:13.686: INFO: Pod var-expansion-7f5d12f4-6f64-4abd-92cd-01e620c24fd9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:42:13.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3041" for this suite. • [SLOW TEST:8.485 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1802,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:42:13.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 21:42:14.470: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 21:42:16.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153334, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153334, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153334, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153334, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:42:18.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153334, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153334, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153334, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153334, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:42:20.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153334, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153334, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153334, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153334, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 21:42:23.548: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:42:24.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-490" for this suite. STEP: Destroying namespace "webhook-490-markers" for this suite. 
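
The listing step above registers a batch of mutating webhooks, lists them, and then deletes them as a collection. Outside the test framework the same objects can be inspected with kubectl; a rough sketch follows (the label selector is a placeholder for illustration, not one printed by this run):

# Mutating webhook configurations are cluster-scoped API objects
kubectl get mutatingwebhookconfigurations
# Delete a whole collection by label, mirroring the test's collection delete
# (label key/value here are hypothetical)
kubectl delete mutatingwebhookconfigurations -l e2e-list-test-webhooks=demo
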
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.706 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":110,"skipped":1804,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:42:24.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:42:41.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4182" for this suite. • [SLOW TEST:16.602 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":111,"skipped":1824,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:42:41.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Jan 20 21:42:41.145: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:42:41.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5962" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":112,"skipped":1825,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:42:41.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6340, will wait for the garbage collector to delete the pods Jan 20 21:42:51.530: INFO: Deleting Job.batch foo took: 11.223448ms Jan 20 21:42:51.831: INFO: Terminating Job.batch foo pods took: 300.762619ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:43:32.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6340" for this suite. 
• [SLOW TEST:51.142 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":113,"skipped":1857,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:43:32.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 20 21:43:48.678: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 20 21:43:48.686: INFO: Pod pod-with-prestop-exec-hook still exists Jan 20 21:43:50.686: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 20 21:43:50.697: INFO: Pod pod-with-prestop-exec-hook still exists Jan 20 21:43:52.687: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 20 21:43:52.700: INFO: Pod pod-with-prestop-exec-hook still exists Jan 20 21:43:54.690: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 20 21:43:54.698: INFO: Pod pod-with-prestop-exec-hook still exists Jan 20 21:43:56.687: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 20 21:43:56.698: INFO: Pod pod-with-prestop-exec-hook still exists Jan 20 21:43:58.687: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 20 21:43:58.695: INFO: Pod pod-with-prestop-exec-hook still exists Jan 20 21:44:00.687: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 20 21:44:00.697: INFO: Pod pod-with-prestop-exec-hook still exists Jan 20 21:44:02.686: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 20 21:44:02.692: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:44:02.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6758" for this suite. 
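
The waiting loop above covers the window in which the kubelet runs the container's preStop hook and then terminates the pod. A pod with an exec-style preStop hook can be sketched roughly as below (image and hook command are assumptions; the real test wires the hook to report back to a helper pod):

# Pod whose container runs a command just before termination
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo prestop"]   # runs before SIGTERM is sent
EOF
kubectl delete pod pod-with-prestop-exec-hook   # triggers the hook, then termination
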
• [SLOW TEST:30.278 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1871,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:44:02.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Jan 20 21:44:02.837: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix224729948/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:44:03.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5249" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":115,"skipped":1902,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:44:03.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 20 21:44:03.124: INFO: Waiting up to 5m0s for pod "pod-28e4ec06-1c9a-4d8f-9ddf-d10f9fa7d473" in namespace "emptydir-5607" to be "success or failure" Jan 20 21:44:03.128: INFO: Pod "pod-28e4ec06-1c9a-4d8f-9ddf-d10f9fa7d473": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207214ms Jan 20 21:44:05.137: INFO: Pod "pod-28e4ec06-1c9a-4d8f-9ddf-d10f9fa7d473": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013203974s Jan 20 21:44:07.146: INFO: Pod "pod-28e4ec06-1c9a-4d8f-9ddf-d10f9fa7d473": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022204848s Jan 20 21:44:09.158: INFO: Pod "pod-28e4ec06-1c9a-4d8f-9ddf-d10f9fa7d473": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03451701s Jan 20 21:44:11.164: INFO: Pod "pod-28e4ec06-1c9a-4d8f-9ddf-d10f9fa7d473": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040647322s STEP: Saw pod success Jan 20 21:44:11.164: INFO: Pod "pod-28e4ec06-1c9a-4d8f-9ddf-d10f9fa7d473" satisfied condition "success or failure" Jan 20 21:44:11.168: INFO: Trying to get logs from node jerma-node pod pod-28e4ec06-1c9a-4d8f-9ddf-d10f9fa7d473 container test-container: STEP: delete the pod Jan 20 21:44:11.389: INFO: Waiting for pod pod-28e4ec06-1c9a-4d8f-9ddf-d10f9fa7d473 to disappear Jan 20 21:44:11.397: INFO: Pod pod-28e4ec06-1c9a-4d8f-9ddf-d10f9fa7d473 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:44:11.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5607" for this suite. • [SLOW TEST:8.345 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1904,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:44:11.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:44:16.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1919" for this suite. 
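
The watch spec above starts watches from several historical resource versions and verifies that every watcher observes the events in the identical order. The same ordered stream can be inspected through the raw API; a sketch, with the namespace and resource as placeholders:

# Open a local API proxy, then start a watch from a resourceVersion.
# Watches opened from the same resourceVersion must replay events in the same order.
kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=0"
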
• [SLOW TEST:5.533 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":117,"skipped":1907,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:44:16.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0120 21:44:27.309154 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 20 21:44:27.309: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:44:27.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9155" for this suite. 
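
The pods created by the ReplicationController carry ownerReferences back to it, which is what lets a non-orphaning delete of the RC cascade to the pods, as verified above. A rough kubectl equivalent (the RC name is hypothetical; kubectl of this era takes a boolean --cascade flag):

# Inspect the ownerReference the garbage collector follows
kubectl get pods -o jsonpath='{.items[*].metadata.ownerReferences[*].name}'
# Cascading (non-orphaning) delete: the garbage collector removes the dependent pods
kubectl delete rc simpletest-rc --cascade=true
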
• [SLOW TEST:10.376 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":118,"skipped":1935,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:44:27.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 20 21:44:28.008: INFO: Waiting up to 5m0s for pod "pod-77f624fb-e6a2-4e35-a688-6740d69bd475" in namespace "emptydir-2495" to be "success or failure" Jan 20 21:44:28.025: INFO: Pod "pod-77f624fb-e6a2-4e35-a688-6740d69bd475": Phase="Pending", Reason="", readiness=false. Elapsed: 16.059352ms Jan 20 21:44:30.038: INFO: Pod "pod-77f624fb-e6a2-4e35-a688-6740d69bd475": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029982151s Jan 20 21:44:32.047: INFO: Pod "pod-77f624fb-e6a2-4e35-a688-6740d69bd475": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038279791s Jan 20 21:44:34.052: INFO: Pod "pod-77f624fb-e6a2-4e35-a688-6740d69bd475": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043689666s Jan 20 21:44:36.079: INFO: Pod "pod-77f624fb-e6a2-4e35-a688-6740d69bd475": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070149535s STEP: Saw pod success Jan 20 21:44:36.079: INFO: Pod "pod-77f624fb-e6a2-4e35-a688-6740d69bd475" satisfied condition "success or failure" Jan 20 21:44:36.095: INFO: Trying to get logs from node jerma-node pod pod-77f624fb-e6a2-4e35-a688-6740d69bd475 container test-container: STEP: delete the pod Jan 20 21:44:36.166: INFO: Waiting for pod pod-77f624fb-e6a2-4e35-a688-6740d69bd475 to disappear Jan 20 21:44:36.173: INFO: Pod pod-77f624fb-e6a2-4e35-a688-6740d69bd475 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:44:36.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2495" for this suite. 
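
This spec mounts an emptyDir with medium Memory (tmpfs) and checks that the mount carries the expected default mode. A comparable pod can be sketched as follows (image and probe command are assumptions):

# Pod that mounts a tmpfs-backed emptyDir and reports its mode and filesystem type
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # back the volume with tmpfs instead of node disk
EOF
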
• [SLOW TEST:8.861 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1975,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:44:36.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Jan 20 21:44:36.265: INFO: Waiting up to 5m0s for pod "var-expansion-54dbd812-33fb-4146-b533-fd07012ba74f" in namespace "var-expansion-1505" to be "success or failure" Jan 20 21:44:36.270: INFO: Pod "var-expansion-54dbd812-33fb-4146-b533-fd07012ba74f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.944924ms Jan 20 21:44:38.277: INFO: Pod "var-expansion-54dbd812-33fb-4146-b533-fd07012ba74f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012296212s Jan 20 21:44:40.317: INFO: Pod "var-expansion-54dbd812-33fb-4146-b533-fd07012ba74f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051451384s Jan 20 21:44:42.322: INFO: Pod "var-expansion-54dbd812-33fb-4146-b533-fd07012ba74f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057365826s Jan 20 21:44:44.359: INFO: Pod "var-expansion-54dbd812-33fb-4146-b533-fd07012ba74f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093702337s STEP: Saw pod success Jan 20 21:44:44.359: INFO: Pod "var-expansion-54dbd812-33fb-4146-b533-fd07012ba74f" satisfied condition "success or failure" Jan 20 21:44:44.372: INFO: Trying to get logs from node jerma-node pod var-expansion-54dbd812-33fb-4146-b533-fd07012ba74f container dapi-container: STEP: delete the pod Jan 20 21:44:44.584: INFO: Waiting for pod var-expansion-54dbd812-33fb-4146-b533-fd07012ba74f to disappear Jan 20 21:44:44.602: INFO: Pod var-expansion-54dbd812-33fb-4146-b533-fd07012ba74f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:44:44.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1505" for this suite. 
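
The substitution being tested is the kubelet's own $(VAR) expansion in a container's command and args, which works without any shell in the image. A minimal equivalent (the image is an assumption):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the environment"
    # $(MESSAGE) is expanded by the kubelet, not by a shell
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
EOF
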
• [SLOW TEST:8.430 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":2020,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:44:44.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:44:53.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2315" for this suite. 
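
The read-only check above relies on securityContext.readOnlyRootFilesystem, which makes the container's entire root filesystem read-only so that writes fail. A minimal sketch (image and write probe are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /file 2>/dev/null && echo writable || echo read-only"]
    securityContext:
      readOnlyRootFilesystem: true   # any write to the root filesystem fails
EOF
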
• [SLOW TEST:8.403 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2034,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:44:53.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:44:53.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jan 20 21:44:53.194: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-20T21:44:53Z generation:1 name:name1 resourceVersion:3257191 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9871d2d2-538b-4e7d-93f1-4bd100653668] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jan 20 21:45:03.205: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-20T21:45:03Z generation:1 name:name2 resourceVersion:3257223 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d286eb61-9421-4a00-b62f-74cc7163e7eb] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jan 20 21:45:13.259: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-20T21:44:53Z generation:2 name:name1 resourceVersion:3257247 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9871d2d2-538b-4e7d-93f1-4bd100653668] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jan 20 21:45:23.269: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-20T21:45:03Z generation:2 name:name2 resourceVersion:3257269 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d286eb61-9421-4a00-b62f-74cc7163e7eb] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jan 20 21:45:33.287: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-20T21:44:53Z generation:2 name:name1 resourceVersion:3257294 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9871d2d2-538b-4e7d-93f1-4bd100653668] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jan 20 21:45:43.305: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-20T21:45:03Z generation:2 name:name2 resourceVersion:3257323 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d286eb61-9421-4a00-b62f-74cc7163e7eb] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:45:53.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2257" for this suite. • [SLOW TEST:60.893 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":122,"skipped":2036,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:45:53.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-0e6b339a-6b0a-47a6-b326-2e658f2a0e12 STEP: Creating a pod to test consume secrets Jan 20 21:45:54.151: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dcf285ea-f088-48cc-a6a4-fc9d99be05ea" in namespace "projected-521" to be "success or failure" Jan 20 21:45:54.157: INFO: Pod "pod-projected-secrets-dcf285ea-f088-48cc-a6a4-fc9d99be05ea": Phase="Pending", Reason="", readiness=false. Elapsed: 5.411084ms Jan 20 21:45:56.181: INFO: Pod "pod-projected-secrets-dcf285ea-f088-48cc-a6a4-fc9d99be05ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030118739s Jan 20 21:45:58.191: INFO: Pod "pod-projected-secrets-dcf285ea-f088-48cc-a6a4-fc9d99be05ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039572941s Jan 20 21:46:00.198: INFO: Pod "pod-projected-secrets-dcf285ea-f088-48cc-a6a4-fc9d99be05ea": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.046812513s Jan 20 21:46:02.442: INFO: Pod "pod-projected-secrets-dcf285ea-f088-48cc-a6a4-fc9d99be05ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.290549518s STEP: Saw pod success Jan 20 21:46:02.442: INFO: Pod "pod-projected-secrets-dcf285ea-f088-48cc-a6a4-fc9d99be05ea" satisfied condition "success or failure" Jan 20 21:46:02.455: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-dcf285ea-f088-48cc-a6a4-fc9d99be05ea container projected-secret-volume-test: STEP: delete the pod Jan 20 21:46:02.611: INFO: Waiting for pod pod-projected-secrets-dcf285ea-f088-48cc-a6a4-fc9d99be05ea to disappear Jan 20 21:46:02.626: INFO: Pod pod-projected-secrets-dcf285ea-f088-48cc-a6a4-fc9d99be05ea no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:46:02.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-521" for this suite. • [SLOW TEST:8.723 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2040,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:46:02.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 21:46:03.508: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 21:46:05.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153563, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153563, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153563, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153563, loc:(*time.Location)(0x7d100a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:46:07.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153563, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153563, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153563, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153563, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:46:09.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153563, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153563, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153563, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153563, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 21:46:12.647: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:46:12.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5957-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:46:13.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5299" for this suite. STEP: Destroying namespace "webhook-5299-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.443 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":124,"skipped":2052,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:46:14.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:46:14.262: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 20 21:46:19.357: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 20 21:46:25.396: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 20 21:46:25.517: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6057 /apis/apps/v1/namespaces/deployment-6057/deployments/test-cleanup-deployment c5becdf2-895b-478f-a7e3-e251a6645ed2 3257545 1 2020-01-20 21:46:25 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c4e388 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jan 20 21:46:25.540: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-6057 /apis/apps/v1/namespaces/deployment-6057/replicasets/test-cleanup-deployment-55ffc6b7b6 dd4fde40-86f0-498b-92ba-fd5e149df546 3257547 1 2020-01-20 21:46:25 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment c5becdf2-895b-478f-a7e3-e251a6645ed2 0xc002d4f197 0xc002d4f198}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d4f208 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 20 21:46:25.540: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 20 21:46:25.541: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6057 /apis/apps/v1/namespaces/deployment-6057/replicasets/test-cleanup-controller 6b2354cd-1df3-470d-be58-1d99010c2d4c 3257546 1 2020-01-20 21:46:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment c5becdf2-895b-478f-a7e3-e251a6645ed2 0xc002d4f0af 0xc002d4f0c0}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d4f128 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan
20 21:46:25.629: INFO: Pod "test-cleanup-controller-xpm52" is available: &Pod{ObjectMeta:{test-cleanup-controller-xpm52 test-cleanup-controller- deployment-6057 /api/v1/namespaces/deployment-6057/pods/test-cleanup-controller-xpm52 74007486-2dc2-4a72-9965-e77101cf83a1 3257539 0 2020-01-20 21:46:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 6b2354cd-1df3-470d-be58-1d99010c2d4c 0xc004224747 0xc004224748}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thp7g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thp7g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thp7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:46:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:46:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:46:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-01-20 21:46:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-20 21:46:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-20 21:46:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6d6dd2d51d14d50eb65f4bfedf1cede89e4727f86e5c602107b5891962efe3a1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:46:25.630: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-k8p56" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-k8p56 test-cleanup-deployment-55ffc6b7b6- deployment-6057 /api/v1/namespaces/deployment-6057/pods/test-cleanup-deployment-55ffc6b7b6-k8p56 4cbbed9e-f974-4277-a6d9-f874b2195a7b 3257551 0 2020-01-20 21:46:25 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 dd4fde40-86f0-498b-92ba-fd5e149df546 0xc0042248c7 0xc0042248c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-thp7g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-thp7g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-thp7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleratio
n{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:46:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:46:25.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6057" for this suite. • [SLOW TEST:11.578 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":125,"skipped":2064,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:46:25.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 21:46:26.988: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 21:46:29.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, 
loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:46:31.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:46:33.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:46:35.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:46:37.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, 
loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:46:39.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153587, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 21:46:42.072: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jan 20 21:46:42.110: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:46:42.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2855" for this suite. STEP: Destroying namespace "webhook-2855-markers" for this suite. 
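The "Registering the crd webhook via the AdmissionRegistration API" step above amounts to creating a ValidatingWebhookConfiguration whose rule matches CREATE operations on customresourcedefinitions in the apiextensions.k8s.io group, with its client config pointed at the e2e-test-webhook service in the webhook-2855 namespace deployed earlier. A minimal sketch using the admissionregistration/v1 types; the configuration name and the /crd-webhook path are illustrative placeholders, not the suite's actual values:

package main

import (
	"encoding/json"
	"os"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/crd-webhook"                        // hypothetical handler path
	failurePolicy := admissionregistrationv1.Fail // deny the request if the webhook rejects it
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-creation.example.com"}, // hypothetical name
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd-creation.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-2855",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
			},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	json.NewEncoder(os.Stdout).Encode(cfg) // print the object as JSON for inspection
}

With such a configuration registered, the custom resource definition created in the next step is rejected by the API server, which is exactly what the test expects.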
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.613 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":126,"skipped":2066,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:46:42.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 20 21:46:42.371: INFO: Waiting up to 5m0s for pod "pod-cb135d6b-70f3-497d-99c5-5fe7af514260" in namespace "emptydir-5255" to be "success or failure" Jan 20 21:46:42.420: INFO: Pod "pod-cb135d6b-70f3-497d-99c5-5fe7af514260": Phase="Pending", Reason="", readiness=false. Elapsed: 49.193209ms Jan 20 21:46:44.429: INFO: Pod "pod-cb135d6b-70f3-497d-99c5-5fe7af514260": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058113493s Jan 20 21:46:46.438: INFO: Pod "pod-cb135d6b-70f3-497d-99c5-5fe7af514260": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066731764s Jan 20 21:46:48.449: INFO: Pod "pod-cb135d6b-70f3-497d-99c5-5fe7af514260": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077488706s Jan 20 21:46:50.459: INFO: Pod "pod-cb135d6b-70f3-497d-99c5-5fe7af514260": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088201395s Jan 20 21:46:52.472: INFO: Pod "pod-cb135d6b-70f3-497d-99c5-5fe7af514260": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100656663s STEP: Saw pod success Jan 20 21:46:52.472: INFO: Pod "pod-cb135d6b-70f3-497d-99c5-5fe7af514260" satisfied condition "success or failure" Jan 20 21:46:52.476: INFO: Trying to get logs from node jerma-node pod pod-cb135d6b-70f3-497d-99c5-5fe7af514260 container test-container: STEP: delete the pod Jan 20 21:46:52.756: INFO: Waiting for pod pod-cb135d6b-70f3-497d-99c5-5fe7af514260 to disappear Jan 20 21:46:52.796: INFO: Pod pod-cb135d6b-70f3-497d-99c5-5fe7af514260 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:46:52.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5255" for this suite. 
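The pod behind this emptydir test writes into an emptyDir volume on the default (node-disk) medium as a non-root user and verifies 0666 file permissions, exiting successfully when the check passes. A rough Go equivalent of that pod; the UID, image, and shell command are stand-ins for what the e2e mounttest container actually does:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1001) // hypothetical non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// An empty EmptyDirVolumeSource means the default medium (node disk);
				// corev1.StorageMediumMemory would request tmpfs instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "writer",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /scratch/f && chmod 0666 /scratch/f && ls -l /scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
	json.NewEncoder(os.Stdout).Encode(pod)
}

The pod reaching Phase="Succeeded", as in the polling lines above, is what the framework treats as "success or failure" being satisfied.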
• [SLOW TEST:10.583 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2071,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:46:52.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 21:46:53.743: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 21:46:55.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153613, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153613, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153613, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153613, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:46:57.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153613, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153613, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153613, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153613, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:46:59.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153613, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153613, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153613, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153613, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 21:47:02.841: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:47:02.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9315-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:47:04.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6868" for this suite. STEP: Destroying namespace "webhook-6868-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.615 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":128,"skipped":2072,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:47:04.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 21:47:05.389: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 21:47:07.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:47:09.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:47:11.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:47:13.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153625, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 21:47:16.614: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:47:16.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-9354" for this suite. STEP: Destroying namespace "webhook-9354-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.657 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":129,"skipped":2083,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:47:17.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0120 21:47:47.752374 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 20 21:47:47.752: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:47:47.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8610" for this suite. 
• [SLOW TEST:30.625 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":130,"skipped":2107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:47:47.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 21:47:48.901: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 21:47:50.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:47:52.940: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:47:55.303: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:47:57.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:47:58.992: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715153668, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 21:48:01.972: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the 
AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:48:02.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2815" for this suite. STEP: Destroying namespace "webhook-2815-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.466 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":131,"skipped":2140,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:48:02.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:48:23.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9307" for this suite. STEP: Destroying namespace "nsdeletetest-3710" for this suite. Jan 20 21:48:23.528: INFO: Namespace nsdeletetest-3710 was already deleted STEP: Destroying namespace "nsdeletetest-7850" for this suite. 
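The namespace-deletion sequence above (create namespace, create pod, delete namespace, wait, recreate, verify empty) leans on the fact that namespace deletion is asynchronous and cascading: the namespace sits in Terminating while every pod inside it is removed, and only then does the object disappear. A compact client-go sketch of the delete-and-wait part, again with the pre-1.18 call signatures; the namespace name is a placeholder and the polling bounds are arbitrary:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	ns := "nsdeletetest-demo" // hypothetical namespace
	if err := client.CoreV1().Namespaces().Delete(ns, nil); err != nil {
		panic(err)
	}
	// Poll until the namespace object is gone; by then all of its pods
	// have been removed by the namespace controller.
	for i := 0; i < 90; i++ {
		if _, err := client.CoreV1().Namespaces().Get(ns, metav1.GetOptions{}); errors.IsNotFound(err) {
			fmt.Println("namespace and all of its pods are gone")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for namespace deletion")
}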
• [SLOW TEST:21.299 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":132,"skipped":2162,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:48:23.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jan 20 21:48:23.717: INFO: Created pod &Pod{ObjectMeta:{dns-1937 dns-1937 /api/v1/namespaces/dns-1937/pods/dns-1937 8d10eb33-090c-4b99-b8ef-a97ee205d363 3258238 0 2020-01-20 21:48:23 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hfjqv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hfjqv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hfjqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/
not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Jan 20 21:48:33.737: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1937 PodName:dns-1937 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 21:48:33.737: INFO: >>> kubeConfig: /root/.kube/config I0120 21:48:33.814990 9 log.go:172] (0xc002962b00) (0xc001d977c0) Create stream I0120 21:48:33.815300 9 log.go:172] (0xc002962b00) (0xc001d977c0) Stream added, broadcasting: 1 I0120 21:48:33.826941 9 log.go:172] (0xc002962b00) Reply frame received for 1 I0120 21:48:33.827208 9 log.go:172] (0xc002962b00) (0xc001c975e0) Create stream I0120 21:48:33.827269 9 log.go:172] (0xc002962b00) (0xc001c975e0) Stream added, broadcasting: 3 I0120 21:48:33.830525 9 log.go:172] (0xc002962b00) Reply frame received for 3 I0120 21:48:33.830616 9 log.go:172] (0xc002962b00) (0xc001e1f9a0) Create stream I0120 21:48:33.830639 9 log.go:172] (0xc002962b00) (0xc001e1f9a0) Stream added, broadcasting: 5 I0120 21:48:33.835814 9 log.go:172] (0xc002962b00) Reply frame received for 5 I0120 21:48:33.989562 9 log.go:172] (0xc002962b00) Data frame received for 3 I0120 21:48:33.989872 9 log.go:172] (0xc001c975e0) (3) Data frame handling I0120 21:48:33.989955 9 log.go:172] (0xc001c975e0) (3) Data frame sent I0120 21:48:34.072388 9 log.go:172] (0xc002962b00) (0xc001c975e0) Stream removed, broadcasting: 3 I0120 21:48:34.072674 9 log.go:172] (0xc002962b00) Data frame received for 1 I0120 21:48:34.072711 9 log.go:172] (0xc001d977c0) (1) Data frame handling I0120 21:48:34.072871 9 log.go:172] (0xc001d977c0) (1) Data frame sent I0120 21:48:34.072895 9 log.go:172] (0xc002962b00) (0xc001e1f9a0) Stream removed, broadcasting: 5 I0120 21:48:34.072949 9 log.go:172] (0xc002962b00) (0xc001d977c0) Stream removed, broadcasting: 1 I0120 21:48:34.072977 9 log.go:172] (0xc002962b00) Go away received I0120 21:48:34.073880 9 log.go:172] (0xc002962b00) (0xc001d977c0) Stream removed, broadcasting: 1 I0120 21:48:34.073917 9 log.go:172] (0xc002962b00) (0xc001c975e0) Stream removed, broadcasting: 3 I0120 21:48:34.073933 9 log.go:172] (0xc002962b00) (0xc001e1f9a0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
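The relevant part of the pod dumped above is the pair DNSPolicy:None plus DNSConfig with nameserver 1.1.1.1 and search path resolv.conf.local; the rest is defaulting noise. Trimmed to just that, the pod can be built as below (same image and DNS values as in the log; only the pod name differs):

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-custom"},
		Spec: corev1.PodSpec{
			// DNSPolicy "None" tells the kubelet to ignore cluster DNS and
			// build the pod's resolv.conf purely from DNSConfig below.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
		},
	}
	json.NewEncoder(os.Stdout).Encode(pod)
}

The two ExecWithOptions blocks in the log then run /agnhost dns-suffix and /agnhost dns-server-list inside the pod to confirm those values actually landed in its resolver configuration.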
Jan 20 21:48:34.074: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1937 PodName:dns-1937 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 21:48:34.074: INFO: >>> kubeConfig: /root/.kube/config I0120 21:48:34.114313 9 log.go:172] (0xc002bbf080) (0xc00202bea0) Create stream I0120 21:48:34.114441 9 log.go:172] (0xc002bbf080) (0xc00202bea0) Stream added, broadcasting: 1 I0120 21:48:34.121911 9 log.go:172] (0xc002bbf080) Reply frame received for 1 I0120 21:48:34.122095 9 log.go:172] (0xc002bbf080) (0xc00202bf40) Create stream I0120 21:48:34.122107 9 log.go:172] (0xc002bbf080) (0xc00202bf40) Stream added, broadcasting: 3 I0120 21:48:34.127054 9 log.go:172] (0xc002bbf080) Reply frame received for 3 I0120 21:48:34.127120 9 log.go:172] (0xc002bbf080) (0xc001c977c0) Create stream I0120 21:48:34.127144 9 log.go:172] (0xc002bbf080) (0xc001c977c0) Stream added, broadcasting: 5 I0120 21:48:34.129368 9 log.go:172] (0xc002bbf080) Reply frame received for 5 I0120 21:48:34.232648 9 log.go:172] (0xc002bbf080) Data frame received for 3 I0120 21:48:34.232945 9 log.go:172] (0xc00202bf40) (3) Data frame handling I0120 21:48:34.232985 9 log.go:172] (0xc00202bf40) (3) Data frame sent I0120 21:48:34.324117 9 log.go:172] (0xc002bbf080) Data frame received for 1 I0120 21:48:34.324213 9 log.go:172] (0xc002bbf080) (0xc00202bf40) Stream removed, broadcasting: 3 I0120 21:48:34.324294 9 log.go:172] (0xc00202bea0) (1) Data frame handling I0120 21:48:34.324364 9 log.go:172] (0xc002bbf080) (0xc001c977c0) Stream removed, broadcasting: 5 I0120 21:48:34.324392 9 log.go:172] (0xc00202bea0) (1) Data frame sent I0120 21:48:34.324409 9 log.go:172] (0xc002bbf080) (0xc00202bea0) Stream removed, broadcasting: 1 I0120 21:48:34.324446 9 log.go:172] (0xc002bbf080) Go away received I0120 21:48:34.324687 9 log.go:172] (0xc002bbf080) (0xc00202bea0) Stream removed, broadcasting: 1 I0120 21:48:34.324706 9 log.go:172] (0xc002bbf080) (0xc00202bf40) Stream removed, broadcasting: 3 I0120 21:48:34.324753 9 log.go:172] (0xc002bbf080) (0xc001c977c0) Stream removed, broadcasting: 5 Jan 20 21:48:34.324: INFO: Deleting pod dns-1937... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:48:34.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1937" for this suite. • [SLOW TEST:10.907 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":133,"skipped":2169,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:48:34.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:48:51.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7708" for this suite. • [SLOW TEST:17.248 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":134,"skipped":2179,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:48:51.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 20 21:48:51.773: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 20 21:48:51.813: INFO: Waiting for terminating namespaces to be deleted... 
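For reference, the ResourceQuota exercised by the secret-lifecycle test just above is essentially a hard cap on the secrets count: status.used rises when the Secret is created and falls back when it is deleted, which is what the "captures secret creation" and "released usage" steps assert. A minimal sketch of such a quota object (the name and the limit of 1 are illustrative):

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-secrets"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				// Caps the number of Secret objects in the namespace.
				corev1.ResourceSecrets: resource.MustParse("1"),
			},
		},
	}
	json.NewEncoder(os.Stdout).Encode(quota)
}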
Jan 20 21:48:51.818: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 20 21:48:51.849: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 20 21:48:51.850: INFO: Container weave ready: true, restart count 1 Jan 20 21:48:51.850: INFO: Container weave-npc ready: true, restart count 0 Jan 20 21:48:51.850: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 20 21:48:51.850: INFO: Container kube-proxy ready: true, restart count 0 Jan 20 21:48:51.850: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 20 21:48:51.878: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 20 21:48:51.878: INFO: Container kube-apiserver ready: true, restart count 1 Jan 20 21:48:51.878: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 20 21:48:51.878: INFO: Container etcd ready: true, restart count 1 Jan 20 21:48:51.878: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 20 21:48:51.878: INFO: Container coredns ready: true, restart count 0 Jan 20 21:48:51.878: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 20 21:48:51.878: INFO: Container coredns ready: true, restart count 0 Jan 20 21:48:51.878: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 20 21:48:51.878: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 20 21:48:51.878: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 20 21:48:51.878: INFO: Container kube-proxy ready: true, restart count 0 Jan 20 21:48:51.878: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 20 21:48:51.878: INFO: Container weave ready: true, restart count 0 Jan 20 21:48:51.878: INFO: Container weave-npc ready: true, restart count 0 Jan 20 21:48:51.878: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 20 21:48:51.878: INFO: Container kube-scheduler ready: true, restart count 3 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
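The steps recorded next create two pods that collide on hostPort 54322: pod4 with an empty hostIP, which the scheduler treats as 0.0.0.0 (all interfaces), and pod5 with hostIP 127.0.0.1, which still overlaps 0.0.0.0 on the same port and protocol, so pod5 must stay unscheduled. Sketched in Go below; the container port is illustrative, and NodeName is a simpler stand-in for the random node label the test actually applies to pin both pods to jerma-node:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod that claims hostPort 54322 on the given hostIP.
func hostPortPod(name, hostIP string) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeName: "jerma-node", // stand-in for the test's node-label selector
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 80, // illustrative
					HostPort:      54322,
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	enc := json.NewEncoder(os.Stdout)
	enc.Encode(hostPortPod("pod4", ""))          // empty hostIP == 0.0.0.0, expected to schedule
	enc.Encode(hostPortPod("pod5", "127.0.0.1")) // conflicts with pod4, expected to stay Pending
}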
STEP: verifying the node has the label kubernetes.io/e2e-88488930-b5d5-40c5-8748-f7460575f868 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-88488930-b5d5-40c5-8748-f7460575f868 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-88488930-b5d5-40c5-8748-f7460575f868 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:54:10.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2630" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:318.656 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":135,"skipped":2185,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:54:10.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6736 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6736 STEP: creating replication controller externalsvc in namespace services-6736 I0120 21:54:10.664616 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6736, replica count: 2 I0120 21:54:13.716209 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 21:54:16.716961 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 21:54:19.718219 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0120 21:54:22.719684 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jan 20 21:54:22.811: INFO: Creating new exec pod Jan 20 21:54:30.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6736 execpodmjgk6 -- /bin/sh -x -c nslookup nodeport-service' Jan 20 21:54:33.563: INFO: stderr: "I0120 21:54:33.334028 2016 log.go:172] (0xc000106bb0) (0xc0007e7c20) Create stream\nI0120 21:54:33.334210 2016 log.go:172] (0xc000106bb0) (0xc0007e7c20) Stream added, broadcasting: 1\nI0120 21:54:33.340275 2016 log.go:172] (0xc000106bb0) Reply frame received for 1\nI0120 21:54:33.340347 2016 log.go:172] (0xc000106bb0) (0xc0007e7e00) Create stream\nI0120 21:54:33.340360 2016 log.go:172] (0xc000106bb0) (0xc0007e7e00) Stream added, broadcasting: 3\nI0120 21:54:33.342047 2016 log.go:172] (0xc000106bb0) Reply frame received for 3\nI0120 21:54:33.342083 2016 log.go:172] (0xc000106bb0) (0xc0008d00a0) Create stream\nI0120 21:54:33.342094 2016 log.go:172] (0xc000106bb0) (0xc0008d00a0) Stream added, broadcasting: 5\nI0120 21:54:33.343925 2016 log.go:172] (0xc000106bb0) Reply frame received for 5\nI0120 21:54:33.442934 2016 log.go:172] (0xc000106bb0) Data frame received for 5\nI0120 21:54:33.443429 2016 log.go:172] (0xc0008d00a0) (5) Data frame handling\nI0120 21:54:33.443515 2016 log.go:172] (0xc0008d00a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0120 21:54:33.456649 2016 log.go:172] (0xc000106bb0) Data frame received for 3\nI0120 21:54:33.456747 2016 log.go:172] (0xc0007e7e00) (3) Data frame handling\nI0120 21:54:33.456803 2016 log.go:172] (0xc0007e7e00) (3) Data frame sent\nI0120 21:54:33.459607 2016 log.go:172] (0xc000106bb0) Data frame received for 3\nI0120 21:54:33.459662 2016 log.go:172] (0xc0007e7e00) (3) Data frame handling\nI0120 21:54:33.459680 2016 log.go:172] (0xc0007e7e00) (3) Data frame sent\nI0120 21:54:33.546379 2016 log.go:172] (0xc000106bb0) (0xc0007e7e00) Stream removed, broadcasting: 3\nI0120 21:54:33.546500 2016 log.go:172] (0xc000106bb0) Data frame received for 1\nI0120 21:54:33.546528 2016 log.go:172] (0xc0007e7c20) (1) Data frame handling\nI0120 21:54:33.546614 2016 log.go:172] (0xc0007e7c20) (1) Data frame sent\nI0120 21:54:33.546821 2016 log.go:172] (0xc000106bb0) (0xc0007e7c20) Stream removed, broadcasting: 1\nI0120 21:54:33.547213 2016 log.go:172] (0xc000106bb0) (0xc0008d00a0) Stream removed, broadcasting: 5\nI0120 21:54:33.547531 2016 log.go:172] (0xc000106bb0) Go away received\nI0120 21:54:33.548933 2016 log.go:172] (0xc000106bb0) (0xc0007e7c20) Stream removed, broadcasting: 1\nI0120 21:54:33.548982 2016 log.go:172] (0xc000106bb0) (0xc0007e7e00) Stream removed, broadcasting: 3\nI0120 21:54:33.548991 2016 log.go:172] (0xc000106bb0) (0xc0008d00a0) Stream removed, broadcasting: 5\n" Jan 20 21:54:33.563: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6736.svc.cluster.local\tcanonical name = externalsvc.services-6736.svc.cluster.local.\nName:\texternalsvc.services-6736.svc.cluster.local\nAddress: 10.96.103.188\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6736, will wait for the garbage collector to delete the pods Jan 20 21:54:33.638: INFO: Deleting ReplicationController externalsvc took: 9.404837ms Jan 20 21:54:34.040: INFO: Terminating ReplicationController 
externalsvc pods took: 401.326241ms Jan 20 21:54:52.502: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:54:52.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6736" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:42.212 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":136,"skipped":2203,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:54:52.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:55:05.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-412" for this suite. • [SLOW TEST:13.385 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod.
[Conformance]","total":278,"completed":137,"skipped":2209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:55:05.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 21:55:06.153: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1eafc13-7019-4bd9-970e-c0d8b25cb862" in namespace "projected-377" to be "success or failure" Jan 20 21:55:06.173: INFO: Pod "downwardapi-volume-c1eafc13-7019-4bd9-970e-c0d8b25cb862": Phase="Pending", Reason="", readiness=false. Elapsed: 19.239978ms Jan 20 21:55:08.183: INFO: Pod "downwardapi-volume-c1eafc13-7019-4bd9-970e-c0d8b25cb862": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029521059s Jan 20 21:55:10.482: INFO: Pod "downwardapi-volume-c1eafc13-7019-4bd9-970e-c0d8b25cb862": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328955577s Jan 20 21:55:12.492: INFO: Pod "downwardapi-volume-c1eafc13-7019-4bd9-970e-c0d8b25cb862": Phase="Pending", Reason="", readiness=false. Elapsed: 6.338518328s Jan 20 21:55:14.509: INFO: Pod "downwardapi-volume-c1eafc13-7019-4bd9-970e-c0d8b25cb862": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.355885744s STEP: Saw pod success Jan 20 21:55:14.510: INFO: Pod "downwardapi-volume-c1eafc13-7019-4bd9-970e-c0d8b25cb862" satisfied condition "success or failure" Jan 20 21:55:14.518: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c1eafc13-7019-4bd9-970e-c0d8b25cb862 container client-container: STEP: delete the pod Jan 20 21:55:14.602: INFO: Waiting for pod downwardapi-volume-c1eafc13-7019-4bd9-970e-c0d8b25cb862 to disappear Jan 20 21:55:14.644: INFO: Pod downwardapi-volume-c1eafc13-7019-4bd9-970e-c0d8b25cb862 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:55:14.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-377" for this suite. 
• [SLOW TEST:8.876 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2264,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:55:14.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 20 21:55:15.142: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 20 21:55:15.155: INFO: Waiting for terminating namespaces to be deleted... Jan 20 21:55:15.157: INFO: Logging pods the kubelet thinks are on node jerma-node before test Jan 20 21:55:15.164: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded) Jan 20 21:55:15.164: INFO: Container kube-proxy ready: true, restart count 0 Jan 20 21:55:15.164: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 20 21:55:15.164: INFO: Container weave ready: true, restart count 1 Jan 20 21:55:15.164: INFO: Container weave-npc ready: true, restart count 0 Jan 20 21:55:15.164: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Jan 20 21:55:15.182: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 20 21:55:15.182: INFO: Container kube-apiserver ready: true, restart count 1 Jan 20 21:55:15.182: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 20 21:55:15.182: INFO: Container etcd ready: true, restart count 1 Jan 20 21:55:15.182: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 20 21:55:15.182: INFO: Container coredns ready: true, restart count 0 Jan 20 21:55:15.182: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 20 21:55:15.182: INFO: Container coredns ready: true, restart count 0 Jan 20 21:55:15.182: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 20 21:55:15.182: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 20 21:55:15.182: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded) Jan 20 21:55:15.182: INFO: Container kube-proxy ready: true,
restart count 0 Jan 20 21:55:15.182: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 20 21:55:15.182: INFO: Container weave ready: true, restart count 0 Jan 20 21:55:15.182: INFO: Container weave-npc ready: true, restart count 0 Jan 20 21:55:15.182: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 20 21:55:15.182: INFO: Container kube-scheduler ready: true, restart count 3 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-fad977d9-8a1e-4821-9ebb-1b5639696511 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-fad977d9-8a1e-4821-9ebb-1b5639696511 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-fad977d9-8a1e-4821-9ebb-1b5639696511 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:55:33.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1039" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:18.662 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":139,"skipped":2280,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:55:33.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0120 21:55:46.794607 9 metrics_grabber.go:79] Master node is not registered.
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 20 21:55:46.794: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:55:46.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1775" for this suite. • [SLOW TEST:13.756 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":140,"skipped":2287,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:55:47.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-a015f069-603e-4eda-92fd-f6c651b94c07 STEP: Creating secret with name s-test-opt-upd-df53cacd-ddaa-424f-b4ce-68a41a4985a4 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a015f069-603e-4eda-92fd-f6c651b94c07 STEP: Updating secret s-test-opt-upd-df53cacd-ddaa-424f-b4ce-68a41a4985a4 STEP: Creating secret with name s-test-opt-create-80330224-8799-440a-9abf-904d6dce9871 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:56:20.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7929" for this suite. 
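The behavior the Secrets spec above relies on is SecretVolumeSource.Optional. A sketch of the three volumes involved, with the generated name suffixes dropped for brevity:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func secretVol(name, secretName string) corev1.Volume {
    	optional := true
    	return corev1.Volume{
    		Name: name,
    		VolumeSource: corev1.VolumeSource{
    			Secret: &corev1.SecretVolumeSource{
    				SecretName: secretName,
    				// A missing secret yields an empty mount instead of a
    				// pod-level error, and the kubelet keeps re-syncing it.
    				Optional: &optional,
    			},
    		},
    	}
    }

    func main() {
    	vols := []corev1.Volume{
    		secretVol("del", "s-test-opt-del"),       // deleted mid-test; mount must empty out
    		secretVol("upd", "s-test-opt-upd"),       // updated mid-test; new data must appear
    		secretVol("create", "s-test-opt-create"), // created mid-test; data must show up
    	}
    	fmt.Println(len(vols), "volumes")
    }

With Optional set, the kubelet tolerates a missing secret and keeps re-syncing the mount, so the delete, update, and create performed mid-test all become observable in the volume.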
• [SLOW TEST:33.092 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2292,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:56:20.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Jan 20 21:56:20.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jan 20 21:56:20.766: INFO: stderr: "" Jan 20 21:56:20.766: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:56:20.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2532" for this suite. 
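The same api-versions check is easy to reproduce outside the framework. A sketch that assumes kubectl is on PATH and already pointed at the cluster, as in this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// `kubectl api-versions` prints one group/version per line; the core
    	// API appears as the bare string "v1".
    	out, err := exec.Command("kubectl", "api-versions").Output()
    	if err != nil {
    		panic(err)
    	}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line == "v1" {
    			fmt.Println("core v1 is served")
    			return
    		}
    	}
    	fmt.Println("core v1 missing")
    }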
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":142,"skipped":2300,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:56:20.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-696d1589-18dc-4cee-a36d-b8c03c457842 STEP: Creating a pod to test consume configMaps Jan 20 21:56:20.908: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4f673cc8-72ca-4c02-a122-8cd52c7fa9ce" in namespace "projected-2103" to be "success or failure" Jan 20 21:56:20.922: INFO: Pod "pod-projected-configmaps-4f673cc8-72ca-4c02-a122-8cd52c7fa9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 13.620547ms Jan 20 21:56:22.935: INFO: Pod "pod-projected-configmaps-4f673cc8-72ca-4c02-a122-8cd52c7fa9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026950725s Jan 20 21:56:24.951: INFO: Pod "pod-projected-configmaps-4f673cc8-72ca-4c02-a122-8cd52c7fa9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04308746s Jan 20 21:56:27.145: INFO: Pod "pod-projected-configmaps-4f673cc8-72ca-4c02-a122-8cd52c7fa9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.236322835s Jan 20 21:56:30.334: INFO: Pod "pod-projected-configmaps-4f673cc8-72ca-4c02-a122-8cd52c7fa9ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.425935006s STEP: Saw pod success Jan 20 21:56:30.335: INFO: Pod "pod-projected-configmaps-4f673cc8-72ca-4c02-a122-8cd52c7fa9ce" satisfied condition "success or failure" Jan 20 21:56:30.383: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-4f673cc8-72ca-4c02-a122-8cd52c7fa9ce container projected-configmap-volume-test: STEP: delete the pod Jan 20 21:56:31.207: INFO: Waiting for pod pod-projected-configmaps-4f673cc8-72ca-4c02-a122-8cd52c7fa9ce to disappear Jan 20 21:56:31.226: INFO: Pod pod-projected-configmaps-4f673cc8-72ca-4c02-a122-8cd52c7fa9ce no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:56:31.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2103" for this suite. 
• [SLOW TEST:10.453 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2315,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:56:31.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 21:56:31.490: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dcdbde7f-0105-4849-ae8b-f2cc4e4a6c20" in namespace "downward-api-8744" to be "success or failure" Jan 20 21:56:31.507: INFO: Pod "downwardapi-volume-dcdbde7f-0105-4849-ae8b-f2cc4e4a6c20": Phase="Pending", Reason="", readiness=false. Elapsed: 17.674372ms Jan 20 21:56:33.538: INFO: Pod "downwardapi-volume-dcdbde7f-0105-4849-ae8b-f2cc4e4a6c20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047875657s Jan 20 21:56:35.552: INFO: Pod "downwardapi-volume-dcdbde7f-0105-4849-ae8b-f2cc4e4a6c20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06192324s Jan 20 21:56:37.595: INFO: Pod "downwardapi-volume-dcdbde7f-0105-4849-ae8b-f2cc4e4a6c20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104760727s Jan 20 21:56:39.604: INFO: Pod "downwardapi-volume-dcdbde7f-0105-4849-ae8b-f2cc4e4a6c20": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11408092s Jan 20 21:56:41.613: INFO: Pod "downwardapi-volume-dcdbde7f-0105-4849-ae8b-f2cc4e4a6c20": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.123019363s STEP: Saw pod success Jan 20 21:56:41.613: INFO: Pod "downwardapi-volume-dcdbde7f-0105-4849-ae8b-f2cc4e4a6c20" satisfied condition "success or failure" Jan 20 21:56:41.618: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-dcdbde7f-0105-4849-ae8b-f2cc4e4a6c20 container client-container: STEP: delete the pod Jan 20 21:56:41.754: INFO: Waiting for pod downwardapi-volume-dcdbde7f-0105-4849-ae8b-f2cc4e4a6c20 to disappear Jan 20 21:56:41.762: INFO: Pod downwardapi-volume-dcdbde7f-0105-4849-ae8b-f2cc4e4a6c20 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:56:41.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8744" for this suite. • [SLOW TEST:10.528 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:56:41.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 21:56:41.964: INFO: Waiting up to 5m0s for pod "downwardapi-volume-55a63d57-ed18-4f6b-86f2-42933c2784e0" in namespace "downward-api-7866" to be "success or failure" Jan 20 21:56:41.981: INFO: Pod "downwardapi-volume-55a63d57-ed18-4f6b-86f2-42933c2784e0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.810732ms Jan 20 21:56:43.994: INFO: Pod "downwardapi-volume-55a63d57-ed18-4f6b-86f2-42933c2784e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029636826s Jan 20 21:56:46.037: INFO: Pod "downwardapi-volume-55a63d57-ed18-4f6b-86f2-42933c2784e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072983559s Jan 20 21:56:48.041: INFO: Pod "downwardapi-volume-55a63d57-ed18-4f6b-86f2-42933c2784e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077511921s Jan 20 21:56:50.051: INFO: Pod "downwardapi-volume-55a63d57-ed18-4f6b-86f2-42933c2784e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.086752571s STEP: Saw pod success Jan 20 21:56:50.051: INFO: Pod "downwardapi-volume-55a63d57-ed18-4f6b-86f2-42933c2784e0" satisfied condition "success or failure" Jan 20 21:56:50.055: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-55a63d57-ed18-4f6b-86f2-42933c2784e0 container client-container: STEP: delete the pod Jan 20 21:56:50.137: INFO: Waiting for pod downwardapi-volume-55a63d57-ed18-4f6b-86f2-42933c2784e0 to disappear Jan 20 21:56:50.208: INFO: Pod downwardapi-volume-55a63d57-ed18-4f6b-86f2-42933c2784e0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:56:50.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7866" for this suite. • [SLOW TEST:8.446 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2346,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:56:50.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jan 20 21:56:50.296: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:57:11.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6163" for this suite. 
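Mechanically, "rename a version" is an update to the CRD's spec.versions list. A sketch with the apiextensions/v1 types; the version names are illustrative, since the test generates its own CRD:

    package main

    import (
    	"fmt"

    	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    )

    // versions builds a served version list with the first entry as the
    // storage version; schemas are trivial "object" schemas for brevity.
    func versions(names ...string) []apiextensionsv1.CustomResourceDefinitionVersion {
    	out := make([]apiextensionsv1.CustomResourceDefinitionVersion, 0, len(names))
    	for i, n := range names {
    		out = append(out, apiextensionsv1.CustomResourceDefinitionVersion{
    			Name:    n,
    			Served:  true,
    			Storage: i == 0,
    			Schema: &apiextensionsv1.CustomResourceValidation{
    				OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
    			},
    		})
    	}
    	return out
    }

    func main() {
    	var crd apiextensionsv1.CustomResourceDefinition
    	crd.Spec.Versions = versions("v2", "v3") // "set up a multi version CRD"
    	// "rename a version": replace v3 with v4 in spec.versions and update
    	// the CRD; the published OpenAPI spec then serves v4, drops v3, and
    	// leaves v2 untouched, matching the three checks in the log.
    	crd.Spec.Versions = versions("v2", "v4")
    	fmt.Println(crd.Spec.Versions[1].Name)
    }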
• [SLOW TEST:21.368 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":146,"skipped":2351,"failed":0} [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:57:11.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 20 21:57:11.672: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6087 /api/v1/namespaces/watch-6087/configmaps/e2e-watch-test-watch-closed 54193bf7-44c7-4582-a953-da06408be9eb 3259969 0 2020-01-20 21:57:11 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 20 21:57:11.674: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6087 /api/v1/namespaces/watch-6087/configmaps/e2e-watch-test-watch-closed 54193bf7-44c7-4582-a953-da06408be9eb 3259970 0 2020-01-20 21:57:11 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 20 21:57:11.727: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6087 /api/v1/namespaces/watch-6087/configmaps/e2e-watch-test-watch-closed 54193bf7-44c7-4582-a953-da06408be9eb 3259971 0 2020-01-20 21:57:11 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 20 21:57:11.727: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6087 /api/v1/namespaces/watch-6087/configmaps/e2e-watch-test-watch-closed 54193bf7-44c7-4582-a953-da06408be9eb 3259972 0 2020-01-20 21:57:11 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:57:11.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6087" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":147,"skipped":2351,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:57:11.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:57:11.811: INFO: Creating ReplicaSet my-hostname-basic-8e98e552-5a54-47cf-a755-c729b03cb82c Jan 20 21:57:11.842: INFO: Pod name my-hostname-basic-8e98e552-5a54-47cf-a755-c729b03cb82c: Found 0 pods out of 1 Jan 20 21:57:16.848: INFO: Pod name my-hostname-basic-8e98e552-5a54-47cf-a755-c729b03cb82c: Found 1 pods out of 1 Jan 20 21:57:16.849: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8e98e552-5a54-47cf-a755-c729b03cb82c" is running Jan 20 21:57:18.862: INFO: Pod "my-hostname-basic-8e98e552-5a54-47cf-a755-c729b03cb82c-k26jq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 21:57:11 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 21:57:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8e98e552-5a54-47cf-a755-c729b03cb82c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 21:57:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8e98e552-5a54-47cf-a755-c729b03cb82c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 21:57:11 +0000 UTC Reason: Message:}]) Jan 20 21:57:18.863: INFO: Trying to dial the pod Jan 20 21:57:23.906: INFO: Controller my-hostname-basic-8e98e552-5a54-47cf-a755-c729b03cb82c: Got expected result from replica 1 [my-hostname-basic-8e98e552-5a54-47cf-a755-c729b03cb82c-k26jq]: "my-hostname-basic-8e98e552-5a54-47cf-a755-c729b03cb82c-k26jq", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:57:23.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4201" for this suite. 
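The workload under test above is a one-replica ReplicaSet whose pod answers with its own hostname. A sketch; the serve-hostname args, port, and image tag are assumptions based on the agnhost image used elsewhere in this run:

    package main

    import (
    	"fmt"

    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	replicas := int32(1)
    	labels := map[string]string{"name": "my-hostname-basic"}
    	rs := &appsv1.ReplicaSet{
    		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
    		Spec: appsv1.ReplicaSetSpec{
    			Replicas: &replicas,
    			Selector: &metav1.LabelSelector{MatchLabels: labels},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: labels},
    				Spec: corev1.PodSpec{
    					Containers: []corev1.Container{{
    						Name:  "my-hostname-basic",
    						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
    						// serve-hostname answers HTTP with the pod's own
    						// hostname, which is what the dial step compares
    						// against the pod name in the log above.
    						Args:  []string{"serve-hostname"},
    						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
    					}},
    				},
    			},
    		},
    	}
    	fmt.Println(rs.Name)
    }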
• [SLOW TEST:12.184 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":148,"skipped":2370,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:57:23.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 20 21:57:32.687: INFO: Successfully updated pod "pod-update-activedeadlineseconds-9b484b52-2c00-42d6-9097-e2715712f8fe" Jan 20 21:57:32.688: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-9b484b52-2c00-42d6-9097-e2715712f8fe" in namespace "pods-830" to be "terminated due to deadline exceeded" Jan 20 21:57:32.703: INFO: Pod "pod-update-activedeadlineseconds-9b484b52-2c00-42d6-9097-e2715712f8fe": Phase="Running", Reason="", readiness=true. Elapsed: 14.733027ms Jan 20 21:57:34.720: INFO: Pod "pod-update-activedeadlineseconds-9b484b52-2c00-42d6-9097-e2715712f8fe": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.03217733s Jan 20 21:57:34.720: INFO: Pod "pod-update-activedeadlineseconds-9b484b52-2c00-42d6-9097-e2715712f8fe" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:57:34.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-830" for this suite. 
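The pivotal step above is the in-place update of spec.activeDeadlineSeconds. A sketch using current client-go signatures (the v1.17-era client omitted the context argument); the pod name is shortened here, the real one carries a generated suffix:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	pods := kubernetes.NewForConfigOrDie(cfg).CoreV1().Pods("pods-830")

    	// Fetch the running pod and shrink its deadline; the kubelet then
    	// fails it with reason DeadlineExceeded once the deadline passes,
    	// which is the phase transition the log shows.
    	pod, err := pods.Get(context.TODO(), "pod-update-activedeadlineseconds", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	deadline := int64(5) // seconds since pod start; illustrative value
    	pod.Spec.ActiveDeadlineSeconds = &deadline
    	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }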
• [SLOW TEST:10.812 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2380,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:57:34.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 20 21:57:35.597: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created Jan 20 21:57:37.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:57:39.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:57:41.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 21:57:43.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154255, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 21:57:46.746: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:57:46.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:57:48.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1245" for this suite. 
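The wiring behind this spec is the CRD's conversion stanza pointing at the webhook service deployed above. A sketch with the apiextensions/v1 types; the path and port are illustrative, and the CABundle (from the "Setting up server cert" step) is elided:

    package main

    import (
    	"fmt"

    	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    )

    func main() {
    	path := "/crdconvert" // illustrative path on the webhook server
    	port := int32(9443)   // illustrative port on the webhook Service
    	conv := &apiextensionsv1.CustomResourceConversion{
    		Strategy: apiextensionsv1.WebhookConverter,
    		Webhook: &apiextensionsv1.WebhookConversion{
    			ClientConfig: &apiextensionsv1.WebhookClientConfig{
    				Service: &apiextensionsv1.ServiceReference{
    					Namespace: "crd-webhook-1245",
    					Name:      "e2e-test-crd-conversion-webhook",
    					Path:      &path,
    					Port:      &port,
    				},
    				// CABundle would carry the serving cert set up earlier;
    				// elided in this sketch.
    			},
    			ConversionReviewVersions: []string{"v1", "v1beta1"},
    		},
    	}
    	fmt.Println(conv.Strategy)
    }

Listing a mixed set of v1 and v2 CRs then forces the apiserver to call this webhook to convert every object to the requested version, which is what the two List steps exercise.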
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:13.551 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":150,"skipped":2392,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:57:48.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 20 21:58:06.451: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 20 21:58:06.458: INFO: Pod pod-with-prestop-http-hook still exists Jan 20 21:58:08.459: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 20 21:58:09.061: INFO: Pod pod-with-prestop-http-hook still exists Jan 20 21:58:10.459: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 20 21:58:10.470: INFO: Pod pod-with-prestop-http-hook still exists Jan 20 21:58:12.459: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 20 21:58:12.470: INFO: Pod pod-with-prestop-http-hook still exists Jan 20 21:58:14.462: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 20 21:58:14.475: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:58:14.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9157" for this suite. 
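The hook at issue above is a preStop HTTPGet. A sketch of the container spec; the image, path, and port are illustrative, and the handler type is LifecycleHandler in current k8s.io/api (it was named Handler in the v1.17-era API):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	c := corev1.Container{
    		Name:  "pod-with-prestop-http-hook",
    		Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // illustrative
    		Lifecycle: &corev1.Lifecycle{
    			// On deletion, the kubelet issues this GET before stopping the
    			// container; the handler pod records the request, which is
    			// what the "check prestop hook" step asserts.
    			PreStop: &corev1.LifecycleHandler{
    				HTTPGet: &corev1.HTTPGetAction{
    					Path: "/echo?msg=prestop", // illustrative target on the handler pod
    					Port: intstr.FromInt(8080),
    				},
    			},
    		},
    	}
    	fmt.Println(c.Name)
    }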
• [SLOW TEST:26.218 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2411,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:58:14.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 21:58:14.680: INFO: Creating deployment "webserver-deployment" Jan 20 21:58:14.698: INFO: Waiting for observed generation 1 Jan 20 21:58:16.917: INFO: Waiting for all required pods to come up Jan 20 21:58:17.842: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 20 21:58:40.665: INFO: Waiting for deployment "webserver-deployment" to complete Jan 20 21:58:40.695: INFO: Updating deployment "webserver-deployment" with a non-existent image Jan 20 21:58:40.703: INFO: Updating deployment webserver-deployment Jan 20 21:58:40.703: INFO: Waiting for observed generation 2 Jan 20 21:58:44.884: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 20 21:58:45.600: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 20 21:58:45.660: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 20 21:58:47.453: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 20 21:58:47.453: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 20 21:58:47.522: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 20 21:58:47.804: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jan 20 21:58:47.805: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jan 20 21:58:47.836: INFO: Updating deployment webserver-deployment Jan 20 21:58:47.836: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jan 20 21:58:49.371: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 20 21:58:52.310: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 20 21:58:55.007: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6124 /apis/apps/v1/namespaces/deployment-6124/deployments/webserver-deployment 72433e83-42ef-4ded-a125-2c3925e8168a 3260598 3 2020-01-20 21:58:14 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cb3828 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-20 21:58:49 +0000 UTC,LastTransitionTime:2020-01-20 21:58:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-20 21:58:52 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 20 21:58:57.509: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-6124 /apis/apps/v1/namespaces/deployment-6124/replicasets/webserver-deployment-c7997dcc8 77a0256c-adab-4ce2-8005-8fdb7b50cdc0 3260593 3 2020-01-20 21:58:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 72433e83-42ef-4ded-a125-2c3925e8168a 0xc002d0b027 0xc002d0b028}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d0b098 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 20 21:58:57.509: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 20 21:58:57.509: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-6124 /apis/apps/v1/namespaces/deployment-6124/replicasets/webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 3260594 3 2020-01-20 21:58:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 72433e83-42ef-4ded-a125-2c3925e8168a 0xc002d0af57 0xc002d0af58}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d0afb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 20 21:58:59.248: INFO: Pod "webserver-deployment-595b5b9587-4bmb7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4bmb7 webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-4bmb7 51e89629-a7cf-4998-8760-e0f7830c18a1 3260585 0 2020-01-20 21:58:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002d0b5b7 0xc002d0b5b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.249: INFO: Pod "webserver-deployment-595b5b9587-64cg2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-64cg2 webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-64cg2 191dd273-4a0a-4979-9dd8-8949ede5f9bb 3260556 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002d0b6d7 0xc002d0b6d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.250: INFO: Pod "webserver-deployment-595b5b9587-9pfk4" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9pfk4 webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-9pfk4 14185879-655b-41a5-917e-6d8070a15b25 3260432 0 
2020-01-20 21:58:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002d0b7f7 0xc002d0b7f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-01-20 21:58:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-20 21:58:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://21380c171de80cbf053e1830c486d6d0c54a7530c21812879060e9560c6edc51,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 20 21:58:59.250: INFO: Pod "webserver-deployment-595b5b9587-bddkb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bddkb webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-bddkb 27dea6cf-db45-4e64-9bb9-7adc443ded4e 3260434 0 2020-01-20 21:58:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002d0b980 0xc002d0b981}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-20 21:58:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-20 21:58:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6cf65467d99c15a21e375fee9d67bdd8085d414298a76893d16f55ca9f853954,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 20 21:58:59.251: INFO: Pod "webserver-deployment-595b5b9587-cfw2n" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cfw2n webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-cfw2n 54fb5d5a-dbe9-4d6f-891b-09492fe66b48 3260584 0 2020-01-20 21:58:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002d0baf0 0xc002d0baf1}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.252: INFO: Pod "webserver-deployment-595b5b9587-ck5v4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ck5v4 webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-ck5v4 0a1db044-1f8b-4469-a427-595c3a46bdd4 3260609 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002d0bbf7 0xc002d0bbf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-01-20 21:58:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-20 21:58:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 20 21:58:59.252: INFO: Pod "webserver-deployment-595b5b9587-cnwrc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cnwrc webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-cnwrc 83f4192e-7400-4fbc-9464-03b09d93a229 3260611 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002d0bd47 0xc002d0bd48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-20 21:58:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 20 21:58:59.253: INFO: Pod "webserver-deployment-595b5b9587-fjhwp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fjhwp webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-fjhwp c958ed55-1ae5-4daf-a65c-d73a227c2041 3260446 0 2020-01-20 21:58:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002d0bea7 0xc002d0bea8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-01-20 21:58:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-20 21:58:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7de5d73319f0772d509eeb4ddb2ab57b63cd04e70b6ef88c4f7278d2abac3248,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 20 21:58:59.253: INFO: Pod "webserver-deployment-595b5b9587-gbbzn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gbbzn webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-gbbzn 1a83d484-845f-4963-a614-c4b08d89c21b 3260580 0 2020-01-20 21:58:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002db2150 0xc002db2151}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 20 21:58:59.254: INFO: Pod "webserver-deployment-595b5b9587-j7vsc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-j7vsc webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-j7vsc 8001a125-7a18-42e2-8a65-10f7d6db336c 3260427 0 2020-01-20 21:58:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002db2807 0xc002db2808}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-01-20 21:58:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-20 21:58:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7bf00a7682342b178ce82ebbf3586b1f0c2ebdb2046d1177d3952233998a58a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 20 21:58:59.255: INFO: Pod "webserver-deployment-595b5b9587-kcqjw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kcqjw webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-kcqjw ca7448b0-9c3c-44aa-9893-ab2471939ee9 3260599 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002db3490 0xc002db3491}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-20 
21:58:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 20 21:58:59.255: INFO: Pod "webserver-deployment-595b5b9587-kfs7d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kfs7d webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-kfs7d ac9f747a-9662-4a92-824b-180d179f3d4f 3260603 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002db3c77 0xc002db3c78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-20 21:58:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 20 21:58:59.255: INFO: Pod "webserver-deployment-595b5b9587-lqff9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lqff9 webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-lqff9 c8d0a455-5c9e-4545-a0a9-4edc20058d1d 3260554 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002c18117 0xc002c18118}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.256: INFO: Pod "webserver-deployment-595b5b9587-lxdm2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lxdm2 webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-lxdm2 fc75e625-51dd-49c5-9c2a-416cc4026115 3260441 0 2020-01-20 21:58:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002c18227 0xc002c18228}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-20 
21:58:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-20 21:58:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://fc08feb9dc07244c9be797ad6ac12a02e9666b99848760d8d0f635d03fbd1019,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.256: INFO: Pod "webserver-deployment-595b5b9587-mt28t" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mt28t webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-mt28t 77217394-cead-49b2-b215-c32feda32861 3260409 0 2020-01-20 21:58:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002c18390 0xc002c18391}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-20 21:58:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-20 21:58:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://caaa6862737f63127681b5d9b6d8e9f9523ce81823b11a43115ac74fb79c72c0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.257: INFO: Pod "webserver-deployment-595b5b9587-mx2ss" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mx2ss webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-mx2ss e94e2ef1-b88d-441c-a120-62ca8e555065 3260424 0 2020-01-20 21:58:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002c18520 0xc002c18521}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.6,StartTime:2020-01-20 21:58:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-20 21:58:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://cec112e8686844ccd26046349711c890d7f87f73cc3dae210aa112eb59d2918e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.258: INFO: Pod "webserver-deployment-595b5b9587-pcldb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pcldb webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-pcldb b57aa9f9-027f-4899-8c8e-4287f1738a73 3260444 0 2020-01-20 21:58:14 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002c18690 0xc002c18691}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-01-20 21:58:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-20 21:58:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0b3a4207dca1cb0f24cf6d03d27611697524a4081ec048a128713349b7070121,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.258: INFO: Pod "webserver-deployment-595b5b9587-qnkcj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qnkcj webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-qnkcj 91a10611-cb97-400b-9615-cd38f6b215b5 3260559 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002c187f0 0xc002c187f1}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.258: INFO: Pod "webserver-deployment-595b5b9587-t6zm2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t6zm2 webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-t6zm2 2fe70824-55d6-449e-a8b7-4fb8cab13dcb 3260582 0 2020-01-20 21:58:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002c18907 0xc002c18908}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.259: INFO: Pod "webserver-deployment-595b5b9587-vnkd5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vnkd5 webserver-deployment-595b5b9587- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-595b5b9587-vnkd5 75858af2-9b30-4bd3-aaca-e17335ddd75d 
3260583 0 2020-01-20 21:58:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 8bfe20b8-30a5-49f6-a95c-44a02e2bd386 0xc002c18a17 0xc002c18a18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.260: INFO: Pod "webserver-deployment-c7997dcc8-4gfs7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4gfs7 webserver-deployment-c7997dcc8- deployment-6124 
/api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-4gfs7 bb56acbf-d5f3-4b92-a9eb-d6e01889f808 3260588 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c18b27 0xc002c18b28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-20 21:58:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.261: INFO: Pod "webserver-deployment-c7997dcc8-5ntl9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5ntl9 webserver-deployment-c7997dcc8- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-5ntl9 88c7225e-2a06-467e-9e70-f63fb9d092a4 3260476 0 2020-01-20 21:58:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c18ca7 0xc002c18ca8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-20 21:58:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.261: INFO: Pod "webserver-deployment-c7997dcc8-6tjbw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6tjbw webserver-deployment-c7997dcc8- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-6tjbw 9de97641-e6dc-4e28-8892-8121dd3200d1 3260586 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c18e27 0xc002c18e28}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-20 21:58:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.262: INFO: Pod "webserver-deployment-c7997dcc8-86r9v" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-86r9v webserver-deployment-c7997dcc8- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-86r9v e2681695-1495-452f-81e9-69aec4b4d55f 3260561 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c18f97 0xc002c18f98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:
ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.263: INFO: Pod "webserver-deployment-c7997dcc8-9c6s9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9c6s9 webserver-deployment-c7997dcc8- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-9c6s9 c6230748-5da0-4ec3-9eee-1a2db618b462 3260512 0 2020-01-20 21:58:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c190c7 0xc002c190c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-20 21:58:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.264: INFO: Pod "webserver-deployment-c7997dcc8-c9sh7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c9sh7 webserver-deployment-c7997dcc8- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-c9sh7 15f8db78-4869-44e4-8e71-0957e0e09a2d 3260557 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c19247 0xc002c19248}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.264: INFO: Pod "webserver-deployment-c7997dcc8-dm6w8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dm6w8 webserver-deployment-c7997dcc8- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-dm6w8 bc51c89c-b9fa-4943-9ab8-1494018aa5fd 3260558 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c19377 0xc002c19378}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.265: INFO: Pod "webserver-deployment-c7997dcc8-drpf2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-drpf2 webserver-deployment-c7997dcc8- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-drpf2 a02240bb-733a-4c60-aca6-5de9d60e553d 3260490 0 2020-01-20 21:58:40 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c19497 0xc002c19498}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-20 21:58:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.266: INFO: Pod "webserver-deployment-c7997dcc8-fgtx2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fgtx2 webserver-deployment-c7997dcc8- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-fgtx2 8df7493e-2143-4e64-9858-c28f9f116ae3 3260538 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c19617 0xc002c19618}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassNa
me:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.266: INFO: Pod "webserver-deployment-c7997dcc8-hrq68" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hrq68 webserver-deployment-c7997dcc8- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-hrq68 bb506440-130f-4b08-b001-9d0acd4ff7db 3260577 0 2020-01-20 21:58:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c19737 0xc002c19738}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:No
Execute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.267: INFO: Pod "webserver-deployment-c7997dcc8-pkj24" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pkj24 webserver-deployment-c7997dcc8- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-pkj24 2cc9fd7c-f910-45cc-af65-120b49372164 3260506 0 2020-01-20 21:58:41 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c19867 0xc002c19868}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},To
leration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-20 21:58:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.268: INFO: Pod "webserver-deployment-c7997dcc8-s2h6h" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s2h6h webserver-deployment-c7997dcc8- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-s2h6h 135c7e60-4555-4d27-9d0b-9f99f7309f66 3260563 0 2020-01-20 21:58:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c199d7 0xc002c199d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 20 21:58:59.268: INFO: Pod "webserver-deployment-c7997dcc8-tvpwr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tvpwr webserver-deployment-c7997dcc8- deployment-6124 /api/v1/namespaces/deployment-6124/pods/webserver-deployment-c7997dcc8-tvpwr 1fc63ebc-87c7-47ce-ba41-46f482927764 3260485 0 2020-01-20 21:58:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
77a0256c-adab-4ce2-8005-8fdb7b50cdc0 0xc002c19b07 0xc002c19b08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-clv8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-clv8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-clv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 21:58:40 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-20 21:58:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:58:59.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6124" for this suite. • [SLOW TEST:48.807 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":152,"skipped":2415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:59:03.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jan 20 21:59:07.894: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jan 20 21:59:27.457: INFO: >>> kubeConfig: /root/.kube/config Jan 20 21:59:31.352: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:59:47.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7117" for this suite. 
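For reference, a minimal Go sketch of the kind of multi-version CRD the crd-publish-openapi test registers, so that every served version surfaces in the aggregated OpenAPI document. This is not the test's actual fixture: the group, kind, and schema below are hypothetical placeholders, and the shape assumes the apiextensions.k8s.io/v1 types.

package sketch

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVersionCRD returns a CRD serving two versions of one group; the
// apiserver publishes a schema for each served version in OpenAPI.
func multiVersionCRD() *apiextv1.CustomResourceDefinition {
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
	}
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // hypothetical name
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com", // hypothetical group
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Scope: apiextv1.NamespaceScoped,
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				// Exactly one version may be marked as the storage version.
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
}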
• [SLOW TEST:44.805 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":153,"skipped":2449,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:59:48.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-00c8bcec-1441-414d-978a-e7ed4bf1b3be STEP: Creating a pod to test consume secrets Jan 20 21:59:50.716: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1bec0d9c-76ef-4346-856c-1ed07c66badc" in namespace "projected-1542" to be "success or failure" Jan 20 21:59:51.127: INFO: Pod "pod-projected-secrets-1bec0d9c-76ef-4346-856c-1ed07c66badc": Phase="Pending", Reason="", readiness=false. Elapsed: 410.20483ms Jan 20 21:59:53.240: INFO: Pod "pod-projected-secrets-1bec0d9c-76ef-4346-856c-1ed07c66badc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.52303567s Jan 20 21:59:55.249: INFO: Pod "pod-projected-secrets-1bec0d9c-76ef-4346-856c-1ed07c66badc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.532180709s Jan 20 21:59:57.260: INFO: Pod "pod-projected-secrets-1bec0d9c-76ef-4346-856c-1ed07c66badc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.543268929s Jan 20 21:59:59.271: INFO: Pod "pod-projected-secrets-1bec0d9c-76ef-4346-856c-1ed07c66badc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.554693879s STEP: Saw pod success Jan 20 21:59:59.271: INFO: Pod "pod-projected-secrets-1bec0d9c-76ef-4346-856c-1ed07c66badc" satisfied condition "success or failure" Jan 20 21:59:59.277: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-1bec0d9c-76ef-4346-856c-1ed07c66badc container projected-secret-volume-test: STEP: delete the pod Jan 20 21:59:59.376: INFO: Waiting for pod pod-projected-secrets-1bec0d9c-76ef-4346-856c-1ed07c66badc to disappear Jan 20 21:59:59.384: INFO: Pod pod-projected-secrets-1bec0d9c-76ef-4346-856c-1ed07c66badc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 21:59:59.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1542" for this suite. 
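For reference, a minimal Go sketch of a projected secret volume with a key-to-path mapping and a per-item mode, which is the shape the test above exercises. The secret name, key, and path are hypothetical placeholders, not values taken from the run.

package sketch

import corev1 "k8s.io/api/core/v1"

// projectedSecretVolume maps one secret key to a custom path with an
// explicit file mode, mirroring the "mappings and Item Mode" case.
func projectedSecretVolume() corev1.Volume {
	mode := int32(0400) // read-only for the owner; hypothetical choice
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"}, // hypothetical
						Items: []corev1.KeyToPath{
							// Key and path are placeholders; Mode overrides any volume default.
							{Key: "data-1", Path: "new-path-data-1", Mode: &mode},
						},
					},
				}},
			},
		},
	}
}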
• [SLOW TEST:11.275 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2451,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 21:59:59.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 20 21:59:59.524: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:00:14.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6784" for this suite. 
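For reference, a minimal Go sketch of the watch-then-delete pattern the test above follows: open a watch on the pod before submitting it, then confirm that the ADDED and, after a graceful delete, DELETED events arrive. This assumes a recent client-go (context-taking Watch signature); the kubeconfig path, namespace, and pod name are placeholders, and pod creation/deletion is elided.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Open the watch before creating the pod so no event is missed.
	w, err := cs.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=pod-submit-remove", // hypothetical pod name
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// ... create the pod here, verify it, then delete it gracefully ...

	for ev := range w.ResultChan() {
		fmt.Println("observed event:", ev.Type)
		if ev.Type == watch.Deleted {
			return // deletion observed, which is what the test asserts
		}
	}
}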
• [SLOW TEST:15.024 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2460,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:00:14.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-051825ae-47b6-467f-841f-80588a1d5b58 STEP: Creating a pod to test consume configMaps Jan 20 22:00:14.599: INFO: Waiting up to 5m0s for pod "pod-configmaps-09418070-1132-4faa-9195-0922b157fa0d" in namespace "configmap-4521" to be "success or failure" Jan 20 22:00:14.626: INFO: Pod "pod-configmaps-09418070-1132-4faa-9195-0922b157fa0d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.765451ms Jan 20 22:00:16.637: INFO: Pod "pod-configmaps-09418070-1132-4faa-9195-0922b157fa0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038331095s Jan 20 22:00:18.650: INFO: Pod "pod-configmaps-09418070-1132-4faa-9195-0922b157fa0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051238607s Jan 20 22:00:20.658: INFO: Pod "pod-configmaps-09418070-1132-4faa-9195-0922b157fa0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059511812s Jan 20 22:00:22.665: INFO: Pod "pod-configmaps-09418070-1132-4faa-9195-0922b157fa0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066236643s STEP: Saw pod success Jan 20 22:00:22.665: INFO: Pod "pod-configmaps-09418070-1132-4faa-9195-0922b157fa0d" satisfied condition "success or failure" Jan 20 22:00:22.669: INFO: Trying to get logs from node jerma-node pod pod-configmaps-09418070-1132-4faa-9195-0922b157fa0d container configmap-volume-test: STEP: delete the pod Jan 20 22:00:22.725: INFO: Waiting for pod pod-configmaps-09418070-1132-4faa-9195-0922b157fa0d to disappear Jan 20 22:00:22.730: INFO: Pod pod-configmaps-09418070-1132-4faa-9195-0922b157fa0d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:00:22.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4521" for this suite. 
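For reference, a minimal Go sketch of a ConfigMap volume with defaultMode set, the file permission the test above verifies on disk. The ConfigMap name and the mode value are hypothetical placeholders.

package sketch

import corev1 "k8s.io/api/core/v1"

// configMapVolume mounts a ConfigMap with defaultMode 0400, so every
// projected file carries that permission unless an item overrides it.
func configMapVolume() corev1.Volume {
	mode := int32(0400) // hypothetical mode under test
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"}, // hypothetical
				DefaultMode:          &mode,
			},
		},
	}
}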
• [SLOW TEST:8.317 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2477,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:00:22.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9841.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9841.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9841.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9841.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9841.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9841.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 20 22:00:34.845: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:34.855: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:34.861: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:34.865: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:34.916: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:34.921: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:34.925: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:34.934: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:34.950: INFO: Lookups using dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local] Jan 20 22:00:39.958: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource 
(get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:39.961: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:39.965: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:39.969: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:39.982: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:39.988: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:39.992: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:39.998: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:40.015: INFO: Lookups using dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local] Jan 20 22:00:44.962: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:44.967: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:44.973: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:44.978: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local from 
pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:45.004: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:45.009: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:45.016: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:45.020: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:45.026: INFO: Lookups using dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local] Jan 20 22:00:49.984: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:49.990: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:49.994: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:49.998: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:50.025: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:50.028: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods 
dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:50.035: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:50.039: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:50.047: INFO: Lookups using dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local] Jan 20 22:00:54.963: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:54.969: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:54.974: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:54.980: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:54.997: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:55.001: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:55.006: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:55.016: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:55.027: INFO: Lookups using dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local] Jan 20 22:00:59.961: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:59.967: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:59.973: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:00:59.981: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:01:00.078: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:01:00.091: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:01:00.100: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:01:00.107: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:01:00.118: INFO: Lookups using dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9841.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local jessie_udp@dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9841.svc.cluster.local] Jan 20 22:01:04.979: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the 
requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:01:05.002: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local from pod dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e: the server could not find the requested resource (get pods dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e) Jan 20 22:01:05.019: INFO: Lookups using dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e failed for: [wheezy_udp@dns-test-service-2.dns-9841.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9841.svc.cluster.local] Jan 20 22:01:10.030: INFO: DNS probes using dns-9841/dns-test-47cc993e-c6b5-4906-99eb-eb7de359a14e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:01:10.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9841" for this suite. • [SLOW TEST:47.593 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":157,"skipped":2493,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:01:10.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 22:01:10.555: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b81486b5-02e9-4a1a-b2c5-15d3e2e68a41" in namespace "security-context-test-6698" to be "success or failure" Jan 20 22:01:10.572: INFO: Pod "busybox-privileged-false-b81486b5-02e9-4a1a-b2c5-15d3e2e68a41": Phase="Pending", Reason="", readiness=false. Elapsed: 17.745282ms Jan 20 22:01:12.583: INFO: Pod "busybox-privileged-false-b81486b5-02e9-4a1a-b2c5-15d3e2e68a41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028184857s Jan 20 22:01:14.597: INFO: Pod "busybox-privileged-false-b81486b5-02e9-4a1a-b2c5-15d3e2e68a41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042191888s Jan 20 22:01:16.613: INFO: Pod "busybox-privileged-false-b81486b5-02e9-4a1a-b2c5-15d3e2e68a41": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058101895s Jan 20 22:01:18.622: INFO: Pod "busybox-privileged-false-b81486b5-02e9-4a1a-b2c5-15d3e2e68a41": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.067207422s Jan 20 22:01:20.634: INFO: Pod "busybox-privileged-false-b81486b5-02e9-4a1a-b2c5-15d3e2e68a41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078941337s Jan 20 22:01:20.634: INFO: Pod "busybox-privileged-false-b81486b5-02e9-4a1a-b2c5-15d3e2e68a41" satisfied condition "success or failure" Jan 20 22:01:20.658: INFO: Got logs for pod "busybox-privileged-false-b81486b5-02e9-4a1a-b2c5-15d3e2e68a41": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:01:20.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6698" for this suite. • [SLOW TEST:10.337 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:01:20.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-93378b68-6c66-4817-94b6-7d0c9d42a018 STEP: Creating a pod to test consume secrets Jan 20 22:01:20.986: INFO: Waiting up to 5m0s for pod "pod-secrets-59b4a4c9-9eea-4a4f-92b3-6b26d02efafe" in namespace "secrets-4128" to be "success or failure" Jan 20 22:01:21.178: INFO: Pod "pod-secrets-59b4a4c9-9eea-4a4f-92b3-6b26d02efafe": Phase="Pending", Reason="", readiness=false. Elapsed: 191.784337ms Jan 20 22:01:23.185: INFO: Pod "pod-secrets-59b4a4c9-9eea-4a4f-92b3-6b26d02efafe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198750384s Jan 20 22:01:25.193: INFO: Pod "pod-secrets-59b4a4c9-9eea-4a4f-92b3-6b26d02efafe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207519507s Jan 20 22:01:27.210: INFO: Pod "pod-secrets-59b4a4c9-9eea-4a4f-92b3-6b26d02efafe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223790079s Jan 20 22:01:29.218: INFO: Pod "pod-secrets-59b4a4c9-9eea-4a4f-92b3-6b26d02efafe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.231582601s STEP: Saw pod success Jan 20 22:01:29.218: INFO: Pod "pod-secrets-59b4a4c9-9eea-4a4f-92b3-6b26d02efafe" satisfied condition "success or failure" Jan 20 22:01:29.221: INFO: Trying to get logs from node jerma-node pod pod-secrets-59b4a4c9-9eea-4a4f-92b3-6b26d02efafe container secret-volume-test: STEP: delete the pod Jan 20 22:01:29.318: INFO: Waiting for pod pod-secrets-59b4a4c9-9eea-4a4f-92b3-6b26d02efafe to disappear Jan 20 22:01:29.347: INFO: Pod pod-secrets-59b4a4c9-9eea-4a4f-92b3-6b26d02efafe no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:01:29.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4128" for this suite. • [SLOW TEST:8.686 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2521,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:01:29.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-89dbd3de-8293-4665-a580-2becc606d87f STEP: Creating a pod to test consume configMaps Jan 20 22:01:29.516: INFO: Waiting up to 5m0s for pod "pod-configmaps-85b2fdc3-4a47-4149-b9a1-029169f4ec2d" in namespace "configmap-3838" to be "success or failure" Jan 20 22:01:29.527: INFO: Pod "pod-configmaps-85b2fdc3-4a47-4149-b9a1-029169f4ec2d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.401054ms Jan 20 22:01:31.535: INFO: Pod "pod-configmaps-85b2fdc3-4a47-4149-b9a1-029169f4ec2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018862263s Jan 20 22:01:33.547: INFO: Pod "pod-configmaps-85b2fdc3-4a47-4149-b9a1-029169f4ec2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030818762s Jan 20 22:01:35.558: INFO: Pod "pod-configmaps-85b2fdc3-4a47-4149-b9a1-029169f4ec2d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041430593s Jan 20 22:01:37.564: INFO: Pod "pod-configmaps-85b2fdc3-4a47-4149-b9a1-029169f4ec2d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.047481903s STEP: Saw pod success Jan 20 22:01:37.564: INFO: Pod "pod-configmaps-85b2fdc3-4a47-4149-b9a1-029169f4ec2d" satisfied condition "success or failure" Jan 20 22:01:37.567: INFO: Trying to get logs from node jerma-node pod pod-configmaps-85b2fdc3-4a47-4149-b9a1-029169f4ec2d container configmap-volume-test: STEP: delete the pod Jan 20 22:01:37.715: INFO: Waiting for pod pod-configmaps-85b2fdc3-4a47-4149-b9a1-029169f4ec2d to disappear Jan 20 22:01:37.727: INFO: Pod pod-configmaps-85b2fdc3-4a47-4149-b9a1-029169f4ec2d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:01:37.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3838" for this suite. • [SLOW TEST:8.392 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:01:37.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 22:01:38.823: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 22:01:40.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154498, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154498, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154498, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154498, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:01:42.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154498, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154498, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154498, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154498, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:01:44.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154498, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154498, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154498, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154498, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 22:01:47.944: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jan 20 22:01:56.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-3127 to-be-attached-pod -i -c=container1' Jan 20 22:01:56.384: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:01:56.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3127" for this suite. STEP: Destroying namespace "webhook-3127-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.814 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":161,"skipped":2649,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:01:56.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 20 22:01:56.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7247' Jan 20 22:01:56.745: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 20 22:01:56.745: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Jan 20 22:01:56.764: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 20 22:01:56.774: INFO: scanned /root for discovery docs: Jan 20 22:01:56.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7247' Jan 20 22:02:20.177: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 20 22:02:20.177: INFO: stdout: "Created e2e-test-httpd-rc-d975ba1289fe71c44f41eb91daa8081e\nScaling up e2e-test-httpd-rc-d975ba1289fe71c44f41eb91daa8081e from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-d975ba1289fe71c44f41eb91daa8081e up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-d975ba1289fe71c44f41eb91daa8081e to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jan 20 22:02:20.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7247' Jan 20 22:02:20.332: INFO: stderr: "" Jan 20 22:02:20.333: INFO: stdout: "e2e-test-httpd-rc-d94tw e2e-test-httpd-rc-d975ba1289fe71c44f41eb91daa8081e-rtkpg " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Jan 20 22:02:25.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7247' Jan 20 22:02:25.486: INFO: stderr: "" Jan 20 22:02:25.486: INFO: stdout: "e2e-test-httpd-rc-d975ba1289fe71c44f41eb91daa8081e-rtkpg " Jan 20 22:02:25.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-d975ba1289fe71c44f41eb91daa8081e-rtkpg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7247' Jan 20 22:02:25.594: INFO: stderr: "" Jan 20 22:02:25.594: INFO: stdout: "true" Jan 20 22:02:25.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-d975ba1289fe71c44f41eb91daa8081e-rtkpg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7247' Jan 20 22:02:25.678: INFO: stderr: "" Jan 20 22:02:25.679: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jan 20 22:02:25.679: INFO: e2e-test-httpd-rc-d975ba1289fe71c44f41eb91daa8081e-rtkpg is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678 Jan 20 22:02:25.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7247' Jan 20 22:02:25.779: INFO: stderr: "" Jan 20 22:02:25.780: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:02:25.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7247" for this suite.
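Condensed, the command sequence this test drives is the pair of deprecated kubectl invocations below (the stderr lines above already flag both deprecations), followed by the template query used for verification. The commands are copied from the run above, with shell quoting added around the go-template so they can be pasted directly:

# Create the ReplicationController (deprecated --generator=run/v1 form).
kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7247
# Roll it to the same image: rolling-update copies the RC under a hashed name,
# scales the copy up while scaling the original down, then deletes and renames.
kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7247
# Confirm the surviving pod reports the expected image.
kubectl --kubeconfig=/root/.kube/config get pods -l run=e2e-test-httpd-rc -o template --template='{{range .items}}{{range .spec.containers}}{{.image}} {{end}}{{end}}' --namespace=kubectl-7247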
• [SLOW TEST:29.216 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":162,"skipped":2655,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:02:25.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-1d5a0901-b452-45ab-94fa-fa3404134852 STEP: Creating configMap with name cm-test-opt-upd-2c9577f6-8c64-43a4-ba60-9e19d275a930 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-1d5a0901-b452-45ab-94fa-fa3404134852 STEP: Updating configmap cm-test-opt-upd-2c9577f6-8c64-43a4-ba60-9e19d275a930 STEP: Creating configMap with name cm-test-opt-create-98befd38-fde6-43fc-b150-c70524986ebc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:04:09.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2094" for this suite. • [SLOW TEST:103.975 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2664,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:04:09.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:04:26.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9291" for this suite. • [SLOW TEST:16.582 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":164,"skipped":2665,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:04:26.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:04:26.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6679" for this suite. 
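The two quota objects created above differ only in their scope: a Terminating-scoped quota counts only pods with activeDeadlineSeconds set, while a NotTerminating-scoped quota counts only pods without it, which is why each pod in the logged steps is captured by exactly one quota and ignored by the other. A minimal hand-written sketch of the pair (illustrative names; the test generates its own objects through the API):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating        # illustrative name
spec:
  hard:
    pods: "1"
  scopes: ["Terminating"]        # counts only pods with activeDeadlineSeconds set
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-terminating    # illustrative name
spec:
  hard:
    pods: "1"
  scopes: ["NotTerminating"]     # counts only pods without activeDeadlineSeconds
EOF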
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":165,"skipped":2678,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:04:26.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-d4wj STEP: Creating a pod to test atomic-volume-subpath Jan 20 22:04:26.786: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-d4wj" in namespace "subpath-797" to be "success or failure" Jan 20 22:04:26.807: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Pending", Reason="", readiness=false. Elapsed: 21.195929ms Jan 20 22:04:28.815: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02908493s Jan 20 22:04:30.826: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039845202s Jan 20 22:04:32.835: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049695689s Jan 20 22:04:34.857: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071191969s Jan 20 22:04:36.865: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Running", Reason="", readiness=true. Elapsed: 10.078953831s Jan 20 22:04:38.876: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Running", Reason="", readiness=true. Elapsed: 12.090225677s Jan 20 22:04:40.885: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Running", Reason="", readiness=true. Elapsed: 14.099039281s Jan 20 22:04:42.892: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Running", Reason="", readiness=true. Elapsed: 16.106264969s Jan 20 22:04:44.901: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Running", Reason="", readiness=true. Elapsed: 18.11506121s Jan 20 22:04:46.919: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Running", Reason="", readiness=true. Elapsed: 20.133291494s Jan 20 22:04:48.928: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Running", Reason="", readiness=true. Elapsed: 22.142205148s Jan 20 22:04:50.935: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Running", Reason="", readiness=true. Elapsed: 24.149281794s Jan 20 22:04:52.944: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Running", Reason="", readiness=true. Elapsed: 26.158262408s Jan 20 22:04:55.520: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Running", Reason="", readiness=true. 
Elapsed: 28.734163837s Jan 20 22:04:57.533: INFO: Pod "pod-subpath-test-configmap-d4wj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.747441809s STEP: Saw pod success Jan 20 22:04:57.534: INFO: Pod "pod-subpath-test-configmap-d4wj" satisfied condition "success or failure" Jan 20 22:04:57.539: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-d4wj container test-container-subpath-configmap-d4wj: STEP: delete the pod Jan 20 22:04:57.697: INFO: Waiting for pod pod-subpath-test-configmap-d4wj to disappear Jan 20 22:04:57.707: INFO: Pod pod-subpath-test-configmap-d4wj no longer exists STEP: Deleting pod pod-subpath-test-configmap-d4wj Jan 20 22:04:57.707: INFO: Deleting pod "pod-subpath-test-configmap-d4wj" in namespace "subpath-797" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:04:57.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-797" for this suite. • [SLOW TEST:31.121 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":166,"skipped":2699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:04:57.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Jan 20 22:04:57.870: INFO: Waiting up to 5m0s for pod "pod-004a9aae-5acb-4cea-8db5-f1b510e0d848" in namespace "emptydir-3162" to be "success or failure" Jan 20 22:04:58.095: INFO: Pod "pod-004a9aae-5acb-4cea-8db5-f1b510e0d848": Phase="Pending", Reason="", readiness=false. Elapsed: 224.803295ms Jan 20 22:05:00.102: INFO: Pod "pod-004a9aae-5acb-4cea-8db5-f1b510e0d848": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231980187s Jan 20 22:05:02.116: INFO: Pod "pod-004a9aae-5acb-4cea-8db5-f1b510e0d848": Phase="Pending", Reason="", readiness=false. Elapsed: 4.246029457s Jan 20 22:05:04.128: INFO: Pod "pod-004a9aae-5acb-4cea-8db5-f1b510e0d848": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.257672832s Jan 20 22:05:06.139: INFO: Pod "pod-004a9aae-5acb-4cea-8db5-f1b510e0d848": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.268944245s STEP: Saw pod success Jan 20 22:05:06.139: INFO: Pod "pod-004a9aae-5acb-4cea-8db5-f1b510e0d848" satisfied condition "success or failure" Jan 20 22:05:06.144: INFO: Trying to get logs from node jerma-node pod pod-004a9aae-5acb-4cea-8db5-f1b510e0d848 container test-container: STEP: delete the pod Jan 20 22:05:06.331: INFO: Waiting for pod pod-004a9aae-5acb-4cea-8db5-f1b510e0d848 to disappear Jan 20 22:05:06.346: INFO: Pod pod-004a9aae-5acb-4cea-8db5-f1b510e0d848 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:05:06.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3162" for this suite. • [SLOW TEST:8.633 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2789,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:05:06.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-05db2900-b9f9-410a-9e91-4b58c28869f1 Jan 20 22:05:06.514: INFO: Pod name my-hostname-basic-05db2900-b9f9-410a-9e91-4b58c28869f1: Found 0 pods out of 1 Jan 20 22:05:11.521: INFO: Pod name my-hostname-basic-05db2900-b9f9-410a-9e91-4b58c28869f1: Found 1 pods out of 1 Jan 20 22:05:11.521: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-05db2900-b9f9-410a-9e91-4b58c28869f1" are running Jan 20 22:05:13.534: INFO: Pod "my-hostname-basic-05db2900-b9f9-410a-9e91-4b58c28869f1-crslf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 22:05:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 22:05:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-05db2900-b9f9-410a-9e91-4b58c28869f1]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 22:05:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-05db2900-b9f9-410a-9e91-4b58c28869f1]} 
{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 22:05:06 +0000 UTC Reason: Message:}]) Jan 20 22:05:13.535: INFO: Trying to dial the pod Jan 20 22:05:18.574: INFO: Controller my-hostname-basic-05db2900-b9f9-410a-9e91-4b58c28869f1: Got expected result from replica 1 [my-hostname-basic-05db2900-b9f9-410a-9e91-4b58c28869f1-crslf]: "my-hostname-basic-05db2900-b9f9-410a-9e91-4b58c28869f1-crslf", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:05:18.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4068" for this suite. • [SLOW TEST:12.228 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":168,"skipped":2802,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:05:18.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8687 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8687 STEP: creating replication controller externalsvc in namespace services-8687 I0120 22:05:18.820962 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8687, replica count: 2 I0120 22:05:21.873226 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 22:05:24.876008 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 22:05:27.877274 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 22:05:30.878585 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jan 20 22:05:30.932: INFO: Creating new exec pod Jan 20 22:05:37.043: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8687 execpodfxgvs -- /bin/sh -x -c nslookup clusterip-service' Jan 20 22:05:39.390: INFO: stderr: "I0120 22:05:39.123221 2225 log.go:172] (0xc000bfe000) (0xc000447680) Create stream\nI0120 22:05:39.123427 2225 log.go:172] (0xc000bfe000) (0xc000447680) Stream added, broadcasting: 1\nI0120 22:05:39.129019 2225 log.go:172] (0xc000bfe000) Reply frame received for 1\nI0120 22:05:39.129053 2225 log.go:172] (0xc000bfe000) (0xc00070bf40) Create stream\nI0120 22:05:39.129062 2225 log.go:172] (0xc000bfe000) (0xc00070bf40) Stream added, broadcasting: 3\nI0120 22:05:39.130644 2225 log.go:172] (0xc000bfe000) Reply frame received for 3\nI0120 22:05:39.130675 2225 log.go:172] (0xc000bfe000) (0xc000654820) Create stream\nI0120 22:05:39.130688 2225 log.go:172] (0xc000bfe000) (0xc000654820) Stream added, broadcasting: 5\nI0120 22:05:39.131970 2225 log.go:172] (0xc000bfe000) Reply frame received for 5\nI0120 22:05:39.207217 2225 log.go:172] (0xc000bfe000) Data frame received for 5\nI0120 22:05:39.207293 2225 log.go:172] (0xc000654820) (5) Data frame handling\nI0120 22:05:39.207308 2225 log.go:172] (0xc000654820) (5) Data frame sent\n+ nslookup clusterip-service\nI0120 22:05:39.225666 2225 log.go:172] (0xc000bfe000) Data frame received for 3\nI0120 22:05:39.225744 2225 log.go:172] (0xc00070bf40) (3) Data frame handling\nI0120 22:05:39.225783 2225 log.go:172] (0xc00070bf40) (3) Data frame sent\nI0120 22:05:39.363084 2225 log.go:172] (0xc000bfe000) Data frame received for 1\nI0120 22:05:39.363326 2225 log.go:172] (0xc000bfe000) (0xc00070bf40) Stream removed, broadcasting: 3\nI0120 22:05:39.363440 2225 log.go:172] (0xc000447680) (1) Data frame handling\nI0120 22:05:39.363472 2225 log.go:172] (0xc000447680) (1) Data frame sent\nI0120 22:05:39.363555 2225 log.go:172] (0xc000bfe000) (0xc000654820) Stream removed, broadcasting: 5\nI0120 22:05:39.363609 2225 log.go:172] (0xc000bfe000) (0xc000447680) Stream removed, broadcasting: 1\nI0120 22:05:39.363635 2225 log.go:172] (0xc000bfe000) Go away received\nI0120 22:05:39.366736 2225 log.go:172] (0xc000bfe000) (0xc000447680) Stream removed, broadcasting: 1\nI0120 22:05:39.366788 2225 log.go:172] (0xc000bfe000) (0xc00070bf40) Stream removed, broadcasting: 3\nI0120 22:05:39.366843 2225 log.go:172] (0xc000bfe000) (0xc000654820) Stream removed, broadcasting: 5\n" Jan 20 22:05:39.391: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8687.svc.cluster.local\tcanonical name = externalsvc.services-8687.svc.cluster.local.\nName:\texternalsvc.services-8687.svc.cluster.local\nAddress: 10.96.196.150\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8687, will wait for the garbage collector to delete the pods Jan 20 22:05:39.456: INFO: Deleting ReplicationController externalsvc took: 9.011035ms Jan 20 22:05:39.857: INFO: Terminating ReplicationController externalsvc pods took: 401.183184ms Jan 20 22:05:53.197: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:05:53.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8687" for this suite. 
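The test performs the type change through the API; one plausible way to make the same change by hand is the patch below. The service and namespace names and the verification command are taken from the run above, but the patch form itself is an assumption, not what the test executes. Clearing spec.clusterIP is required when converting a ClusterIP service to ExternalName:

# Switch the service to ExternalName, pointing at the backing service's DNS name.
kubectl --kubeconfig=/root/.kube/config patch service clusterip-service --namespace=services-8687 --type=merge -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-8687.svc.cluster.local","clusterIP":null}}'
# From inside the namespace, resolution should now return a CNAME, matching the
# nslookup output logged above.
kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8687 execpodfxgvs -- /bin/sh -x -c 'nslookup clusterip-service'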
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:34.650 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":169,"skipped":2829,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:05:53.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4265.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4265.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4265.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4265.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4265.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4265.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 91.35.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.35.91_udp@PTR;check="$$(dig +tcp +noall +answer +search 91.35.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.35.91_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4265.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4265.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4265.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4265.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4265.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4265.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4265.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 91.35.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.35.91_udp@PTR;check="$$(dig +tcp +noall +answer +search 91.35.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.35.91_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 20 22:06:07.542: INFO: Unable to read wheezy_udp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:07.548: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:07.555: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:07.559: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:07.585: INFO: Unable to read jessie_udp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:07.591: INFO: Unable to read jessie_tcp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:07.594: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:07.599: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:07.627: INFO: Lookups using dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83 failed for: [wheezy_udp@dns-test-service.dns-4265.svc.cluster.local wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local jessie_udp@dns-test-service.dns-4265.svc.cluster.local jessie_tcp@dns-test-service.dns-4265.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local] Jan 20 22:06:12.636: INFO: Unable to read wheezy_udp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:12.641: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods 
dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:12.648: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:12.660: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:12.701: INFO: Unable to read jessie_udp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:12.706: INFO: Unable to read jessie_tcp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:12.740: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:12.746: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:12.815: INFO: Lookups using dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83 failed for: [wheezy_udp@dns-test-service.dns-4265.svc.cluster.local wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local jessie_udp@dns-test-service.dns-4265.svc.cluster.local jessie_tcp@dns-test-service.dns-4265.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local] Jan 20 22:06:17.638: INFO: Unable to read wheezy_udp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:17.643: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:17.651: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:17.657: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:17.695: INFO: Unable to read jessie_udp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the 
server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:17.702: INFO: Unable to read jessie_tcp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:17.706: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:17.712: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:17.751: INFO: Lookups using dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83 failed for: [wheezy_udp@dns-test-service.dns-4265.svc.cluster.local wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local jessie_udp@dns-test-service.dns-4265.svc.cluster.local jessie_tcp@dns-test-service.dns-4265.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local] Jan 20 22:06:22.633: INFO: Unable to read wheezy_udp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:22.636: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:22.639: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:22.641: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:22.661: INFO: Unable to read jessie_udp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:22.664: INFO: Unable to read jessie_tcp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:22.667: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:22.669: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod 
dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:22.692: INFO: Lookups using dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83 failed for: [wheezy_udp@dns-test-service.dns-4265.svc.cluster.local wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local jessie_udp@dns-test-service.dns-4265.svc.cluster.local jessie_tcp@dns-test-service.dns-4265.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local] Jan 20 22:06:27.637: INFO: Unable to read wheezy_udp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:27.643: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:27.649: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:27.655: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:27.694: INFO: Unable to read jessie_udp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:27.698: INFO: Unable to read jessie_tcp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:27.702: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:27.707: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:27.739: INFO: Lookups using dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83 failed for: [wheezy_udp@dns-test-service.dns-4265.svc.cluster.local wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local jessie_udp@dns-test-service.dns-4265.svc.cluster.local jessie_tcp@dns-test-service.dns-4265.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local] Jan 20 
22:06:32.646: INFO: Unable to read wheezy_udp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:32.665: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:32.679: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:32.684: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:32.744: INFO: Unable to read jessie_udp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:32.748: INFO: Unable to read jessie_tcp@dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:32.757: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:32.762: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local from pod dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83: the server could not find the requested resource (get pods dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83) Jan 20 22:06:32.803: INFO: Lookups using dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83 failed for: [wheezy_udp@dns-test-service.dns-4265.svc.cluster.local wheezy_tcp@dns-test-service.dns-4265.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local jessie_udp@dns-test-service.dns-4265.svc.cluster.local jessie_tcp@dns-test-service.dns-4265.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4265.svc.cluster.local] Jan 20 22:06:37.702: INFO: DNS probes using dns-4265/dns-test-1e09064f-31b5-482d-8588-c0df15f4ba83 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:06:38.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4265" for this suite. 
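------------------------------
For reference, each prober pod above runs a shell loop of the shape visible at the top of this excerpt: check="$(dig +noall +answer +search <name> <type>)" && test -n "$check" && echo OK > /results/<name>; sleep 1 — and the test passes once every expected result file exists. A minimal Go sketch of one such probe iteration, assuming it runs inside a cluster pod whose resolv.conf points at the cluster DNS; the service FQDN is taken from the log, the result path is a placeholder:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Service FQDN taken from the log above; the trailing dot suppresses
	// resolv.conf search-path expansion.
	name := "dns-test-service.dns-4265.svc.cluster.local."
	for {
		addrs, err := net.LookupHost(name)
		if err == nil && len(addrs) > 0 {
			// Mirror the shell probe's `echo OK > /results/<name>` convention
			// (placeholder file name).
			if werr := os.WriteFile("/results/dns-test-service_A", []byte("OK\n"), 0o644); werr == nil {
				fmt.Println("resolved", name, "->", addrs)
			}
		}
		time.Sleep(1 * time.Second)
	}
}
------------------------------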
• [SLOW TEST:44.967 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":170,"skipped":2886,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:06:38.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 22:06:38.747: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 22:06:40.765: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:06:42.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:06:44.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:06:46.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154798, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 22:06:49.810: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:06:49.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9686" for this suite. STEP: Destroying namespace "webhook-9686-markers" for this suite. 
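------------------------------
The webhook pod deployed above serves a small contract: receive an AdmissionReview for a ConfigMap, return a JSONPatch that edits it, and the API server applies the patch before persisting. A sketch of a handler for that contract, assuming k8s.io/api/admission/v1 is available; the route, patch key, and TLS paths are placeholders, not the suite's actual webhook code:

package main

import (
	"encoding/json"
	"io"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
)

func mutate(w http.ResponseWriter, r *http.Request) {
	body, _ := io.ReadAll(r.Body)
	var review admissionv1.AdmissionReview
	if err := json.Unmarshal(body, &review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// JSONPatch adding a key to the ConfigMap's data: the same shape of
	// mutation "should mutate configmap" asserts on (key is illustrative).
	patch := []byte(`[{"op":"add","path":"/data/mutation-stage-1","value":"yes"}]`)
	pt := admissionv1.PatchTypeJSONPatch
	review.Response = &admissionv1.AdmissionResponse{
		UID:       review.Request.UID,
		Allowed:   true,
		Patch:     patch,
		PatchType: &pt,
	}
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/mutating-configmaps", mutate)
	// The e2e webhook serves TLS with the cert set up in the steps above;
	// these paths are placeholders.
	http.ListenAndServeTLS(":8443", "/tls/tls.crt", "/tls/tls.key", nil)
}
------------------------------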
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.913 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":171,"skipped":2886,"failed":0} S ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:06:50.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 20 22:06:58.744: INFO: Successfully updated pod "pod-update-61080d95-651c-4292-b6cf-b95f3c721492" STEP: verifying the updated pod is in kubernetes Jan 20 22:06:58.754: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:06:58.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8122" for this suite. 
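------------------------------
The "updating the pod" step above can only touch mutable pod fields such as labels; most of the spec is immutable once the pod runs. A minimal client-go sketch of the same get-modify-update flow, reusing names from the log and assuming a recent client-go (context-taking method signatures):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	pod, err := cs.CoreV1().Pods("pods-8122").Get(ctx,
		"pod-update-61080d95-651c-4292-b6cf-b95f3c721492", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	// Labels are mutable on a running pod; the label key here is illustrative.
	pod.Labels["time"] = "updated"
	if _, err := cs.CoreV1().Pods(pod.Namespace).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("updated", pod.Name)
}
------------------------------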
• [SLOW TEST:8.640 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2887,"failed":0} S ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:06:58.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 20 22:06:58.871: INFO: Waiting up to 5m0s for pod "downward-api-63814337-8209-4ec3-b723-b1cab64a1d01" in namespace "downward-api-4902" to be "success or failure" Jan 20 22:06:58.895: INFO: Pod "downward-api-63814337-8209-4ec3-b723-b1cab64a1d01": Phase="Pending", Reason="", readiness=false. Elapsed: 23.833112ms Jan 20 22:07:00.904: INFO: Pod "downward-api-63814337-8209-4ec3-b723-b1cab64a1d01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032914466s Jan 20 22:07:02.914: INFO: Pod "downward-api-63814337-8209-4ec3-b723-b1cab64a1d01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043623472s Jan 20 22:07:04.923: INFO: Pod "downward-api-63814337-8209-4ec3-b723-b1cab64a1d01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052163415s Jan 20 22:07:06.930: INFO: Pod "downward-api-63814337-8209-4ec3-b723-b1cab64a1d01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059009388s Jan 20 22:07:08.939: INFO: Pod "downward-api-63814337-8209-4ec3-b723-b1cab64a1d01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068547551s STEP: Saw pod success Jan 20 22:07:08.940: INFO: Pod "downward-api-63814337-8209-4ec3-b723-b1cab64a1d01" satisfied condition "success or failure" Jan 20 22:07:08.945: INFO: Trying to get logs from node jerma-node pod downward-api-63814337-8209-4ec3-b723-b1cab64a1d01 container dapi-container: STEP: delete the pod Jan 20 22:07:09.016: INFO: Waiting for pod downward-api-63814337-8209-4ec3-b723-b1cab64a1d01 to disappear Jan 20 22:07:09.023: INFO: Pod downward-api-63814337-8209-4ec3-b723-b1cab64a1d01 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:07:09.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4902" for this suite. 
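------------------------------
What the dapi-container above prints is the downward API at work: the pod's own metadata projected into environment variables via fieldRef. A sketch of that wiring using k8s.io/api/core/v1 types, emitted as YAML for inspection; the pod name, image, and env var name are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					// metadata.uid is resolved by the kubelet at pod start.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
------------------------------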
• [SLOW TEST:10.272 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2888,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:07:09.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 20 22:07:09.227: INFO: Waiting up to 5m0s for pod "pod-d8c48fe0-ca22-4997-8c8b-d7fbda135039" in namespace "emptydir-9253" to be "success or failure" Jan 20 22:07:09.274: INFO: Pod "pod-d8c48fe0-ca22-4997-8c8b-d7fbda135039": Phase="Pending", Reason="", readiness=false. Elapsed: 46.633013ms Jan 20 22:07:11.283: INFO: Pod "pod-d8c48fe0-ca22-4997-8c8b-d7fbda135039": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05567288s Jan 20 22:07:13.294: INFO: Pod "pod-d8c48fe0-ca22-4997-8c8b-d7fbda135039": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06632968s Jan 20 22:07:15.302: INFO: Pod "pod-d8c48fe0-ca22-4997-8c8b-d7fbda135039": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074572277s Jan 20 22:07:17.368: INFO: Pod "pod-d8c48fe0-ca22-4997-8c8b-d7fbda135039": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.140488587s STEP: Saw pod success Jan 20 22:07:17.368: INFO: Pod "pod-d8c48fe0-ca22-4997-8c8b-d7fbda135039" satisfied condition "success or failure" Jan 20 22:07:17.372: INFO: Trying to get logs from node jerma-node pod pod-d8c48fe0-ca22-4997-8c8b-d7fbda135039 container test-container: STEP: delete the pod Jan 20 22:07:17.564: INFO: Waiting for pod pod-d8c48fe0-ca22-4997-8c8b-d7fbda135039 to disappear Jan 20 22:07:17.574: INFO: Pod pod-d8c48fe0-ca22-4997-8c8b-d7fbda135039 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:07:17.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9253" for this suite. 
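------------------------------
The tuple in the test name decodes as: run the container as a non-root UID, create a file mode 0777, on an emptyDir of the default medium (node disk rather than tmpfs). A sketch of that pod shape; UID, image, and paths are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func int64p(v int64) *int64 { return &v }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64p(1001), // the "non-root" part
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Empty EmptyDirVolumeSource = default medium (node disk);
				// StorageMediumMemory would make it tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0777 /mnt/f && ls -l /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
------------------------------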
• [SLOW TEST:8.546 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2923,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:07:17.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 20 22:07:17.742: INFO: Number of nodes with available pods: 0 Jan 20 22:07:17.742: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:07:19.511: INFO: Number of nodes with available pods: 0 Jan 20 22:07:19.511: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:07:19.939: INFO: Number of nodes with available pods: 0 Jan 20 22:07:19.939: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:07:20.757: INFO: Number of nodes with available pods: 0 Jan 20 22:07:20.758: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:07:21.759: INFO: Number of nodes with available pods: 0 Jan 20 22:07:21.759: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:07:22.794: INFO: Number of nodes with available pods: 0 Jan 20 22:07:22.794: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:07:24.464: INFO: Number of nodes with available pods: 0 Jan 20 22:07:24.465: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:07:24.915: INFO: Number of nodes with available pods: 0 Jan 20 22:07:24.915: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:07:25.763: INFO: Number of nodes with available pods: 0 Jan 20 22:07:25.764: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:07:26.759: INFO: Number of nodes with available pods: 1 Jan 20 22:07:26.759: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:07:27.757: INFO: Number of nodes with available pods: 2 Jan 20 22:07:27.757: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jan 20 22:07:27.847: INFO: Number of nodes with available pods: 2 Jan 20 22:07:27.847: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
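------------------------------
The per-node counts logged above come from the DaemonSet's status: once a daemon pod is forced to phase Failed, the controller deletes it and creates a replacement, so NumberAvailable dips and then recovers to DesiredNumberScheduled. A polling sketch of that check, reusing names from the log and assuming a recent client-go:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	for {
		ds, err := cs.AppsV1().DaemonSets("daemonsets-6062").Get(ctx, "daemon-set", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("desired=%d available=%d\n",
			ds.Status.DesiredNumberScheduled, ds.Status.NumberAvailable)
		// The failed pod has been revived once every node's daemon pod is
		// available again.
		if ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled {
			return
		}
		time.Sleep(time.Second)
	}
}
------------------------------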
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6062, will wait for the garbage collector to delete the pods Jan 20 22:07:28.966: INFO: Deleting DaemonSet.extensions daemon-set took: 14.420084ms Jan 20 22:07:29.367: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.760808ms Jan 20 22:07:43.277: INFO: Number of nodes with available pods: 0 Jan 20 22:07:43.277: INFO: Number of running nodes: 0, number of available pods: 0 Jan 20 22:07:43.287: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6062/daemonsets","resourceVersion":"3262902"},"items":null} Jan 20 22:07:43.312: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6062/pods","resourceVersion":"3262902"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:07:43.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6062" for this suite. • [SLOW TEST:25.751 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":175,"skipped":2997,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:07:43.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 22:07:43.409: INFO: Creating deployment "test-recreate-deployment" Jan 20 22:07:43.459: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 20 22:07:43.509: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 20 22:07:45.523: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 20 22:07:45.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154863, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154863, loc:(*time.Location)(0x7d100a0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154863, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154863, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:07:47.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154863, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154863, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154863, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154863, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:07:49.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154863, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154863, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154863, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154863, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:07:51.534: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 20 22:07:51.544: INFO: Updating deployment test-recreate-deployment Jan 20 22:07:51.544: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 20 22:07:51.849: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5921 /apis/apps/v1/namespaces/deployment-5921/deployments/test-recreate-deployment 6fad09bd-a376-4393-a41f-4abe6dc1b40f 3262987 2 2020-01-20 22:07:43 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil 
nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027c0468 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-20 22:07:51 +0000 UTC,LastTransitionTime:2020-01-20 22:07:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-20 22:07:51 +0000 UTC,LastTransitionTime:2020-01-20 22:07:43 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 20 22:07:51.862: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-5921 /apis/apps/v1/namespaces/deployment-5921/replicasets/test-recreate-deployment-5f94c574ff f303a012-6e53-4433-9f5e-447f0e9f5be4 3262986 1 2020-01-20 22:07:51 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 6fad09bd-a376-4393-a41f-4abe6dc1b40f 0xc0027c1057 0xc0027c1058}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027c10c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 20 22:07:51.863: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 20 22:07:51.865: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-5921 /apis/apps/v1/namespaces/deployment-5921/replicasets/test-recreate-deployment-799c574856 043ed084-f690-4ce8-b158-4ed46f471108 3262976 2 2020-01-20 22:07:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 6fad09bd-a376-4393-a41f-4abe6dc1b40f 0xc0027c1137 0xc0027c1138}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027c11a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 20 22:07:51.877: INFO: Pod "test-recreate-deployment-5f94c574ff-h5hzz" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-h5hzz test-recreate-deployment-5f94c574ff- deployment-5921 /api/v1/namespaces/deployment-5921/pods/test-recreate-deployment-5f94c574ff-h5hzz 75c883bb-bc79-4d80-8f89-e5eecc7b27ff 3262989 0 2020-01-20 22:07:51 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff f303a012-6e53-4433-9f5e-447f0e9f5be4 0xc0030fb377 0xc0030fb378}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z8vbg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z8vbg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z8vbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:07:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:07:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:07:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:07:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-20 22:07:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:07:51.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5921" for this suite. • [SLOW TEST:8.624 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":176,"skipped":3012,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:07:51.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 22:07:52.750: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 22:07:54.840: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:07:56.849: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:07:58.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:08:00.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:08:03.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715154872, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 22:08:05.911: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 22:08:05.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4949-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:08:07.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6118" for this suite. STEP: Destroying namespace "webhook-6118-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.617 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":177,"skipped":3022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:08:07.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:08:14.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6099" for this suite. 
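------------------------------
"Promptly calculated" above means the quota controller fills in status.hard and status.used shortly after the ResourceQuota is created. A create-and-poll sketch of that check; namespace, quota name, and limits are placeholders, and a recent client-go is assumed:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas("default").Create(ctx, rq, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	for {
		got, err := cs.CoreV1().ResourceQuotas("default").Get(ctx, "test-quota", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// status.used appears once the controller has counted existing pods.
		if used, ok := got.Status.Used[corev1.ResourcePods]; ok {
			fmt.Println("quota calculated, pods used:", used.String())
			return
		}
		time.Sleep(time.Second)
	}
}
------------------------------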
• [SLOW TEST:7.180 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":178,"skipped":3062,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:08:14.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:08:22.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3420" for this suite. • [SLOW TEST:8.215 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":3066,"failed":0} SSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:08:22.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-9398 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9398 to expose endpoints map[] Jan 20 22:08:23.210: INFO: successfully validated that service multi-endpoint-test in namespace services-9398 exposes endpoints map[] (18.212722ms elapsed) STEP: Creating pod pod1 in namespace services-9398 STEP: waiting up to 3m0s for service multi-endpoint-test in 
namespace services-9398 to expose endpoints map[pod1:[100]] Jan 20 22:08:27.485: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.25866418s elapsed, will retry) Jan 20 22:08:32.612: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.385170864s elapsed, will retry) Jan 20 22:08:33.626: INFO: successfully validated that service multi-endpoint-test in namespace services-9398 exposes endpoints map[pod1:[100]] (10.399953253s elapsed) STEP: Creating pod pod2 in namespace services-9398 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9398 to expose endpoints map[pod1:[100] pod2:[101]] Jan 20 22:08:38.681: INFO: Unexpected endpoints: found map[fac5b34c-4e29-4a5c-932d-1000cd685283:[100]], expected map[pod1:[100] pod2:[101]] (5.04120066s elapsed, will retry) Jan 20 22:08:41.969: INFO: successfully validated that service multi-endpoint-test in namespace services-9398 exposes endpoints map[pod1:[100] pod2:[101]] (8.329117818s elapsed) STEP: Deleting pod pod1 in namespace services-9398 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9398 to expose endpoints map[pod2:[101]] Jan 20 22:08:43.016: INFO: successfully validated that service multi-endpoint-test in namespace services-9398 exposes endpoints map[pod2:[101]] (1.041246145s elapsed) STEP: Deleting pod pod2 in namespace services-9398 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9398 to expose endpoints map[] Jan 20 22:08:43.089: INFO: successfully validated that service multi-endpoint-test in namespace services-9398 exposes endpoints map[] (60.710931ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:08:43.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9398" for this suite. 
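The endpoint maps in the log (pod1:[100], pod2:[101]) come from a two-port Service whose pods each serve one target port. A hedged sketch of such a Service follows; the selector, front-end ports, and port names are assumptions — only the target ports 100 and 101 are visible in the log.

apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
  namespace: services-9398
spec:
  selector:
    app: multi-endpoint-test       # assumed; the test matches pods by label
  ports:
  - name: portname1                # assumed name
    port: 80                       # assumed front-end port
    targetPort: 100                # served by pod1, per the endpoints map above
  - name: portname2                # assumed name
    port: 81                       # assumed front-end port
    targetPort: 101                # served by pod2
# Creating pod1 alone yields endpoints map[pod1:[100]]; adding pod2 yields
# map[pod1:[100] pod2:[101]]; deleting both drains the map back to map[],
# exactly the sequence validated above.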
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:20.300 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":180,"skipped":3069,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:08:43.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:08:51.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8128" for this suite. STEP: Destroying namespace "nsdeletetest-6111" for this suite. Jan 20 22:08:51.389: INFO: Namespace nsdeletetest-6111 was already deleted STEP: Destroying namespace "nsdeletetest-9287" for this suite. 
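The namespace-deletion test relies on cascading deletion: every namespaced object, including Services, is removed with its namespace, and a recreated namespace of the same name starts empty. A minimal sketch under assumed names:

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-demo          # hypothetical name
---
apiVersion: v1
kind: Service
metadata:
  name: test-service               # hypothetical name
  namespace: nsdeletetest-demo
spec:
  ports:
  - port: 80
# Deleting the namespace removes the Service with it; after the namespace is
# recreated, listing services in it returns nothing, which is the
# "Verifying there is no service in the namespace" step above:
#   kubectl delete namespace nsdeletetest-demo
#   kubectl get services -n nsdeletetest-demo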
• [SLOW TEST:8.108 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":181,"skipped":3080,"failed":0} [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:08:51.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 20 22:08:51.551: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9526 /api/v1/namespaces/watch-9526/configmaps/e2e-watch-test-resource-version 642d6902-c9b5-434f-8dd9-82fd9c99d2a4 3263344 0 2020-01-20 22:08:51 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 20 22:08:51.552: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9526 /api/v1/namespaces/watch-9526/configmaps/e2e-watch-test-resource-version 642d6902-c9b5-434f-8dd9-82fd9c99d2a4 3263345 0 2020-01-20 22:08:51 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:08:51.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9526" for this suite. 
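The watch test works because a watch opened at a given resourceVersion replays every change made after that version. The ConfigMap printed in the events above looks like this (name, namespace, label, and data taken from the log); the raw watch request in the trailing comment is a sketch of the API call, with the resource version elided:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  namespace: watch-9526
  labels:
    watch-this-configmap: from-resource-version
data:
  mutation: "2"
# Starting the watch at the resourceVersion returned by the FIRST update means
# only the second modification and the deletion are replayed -- hence exactly
# one MODIFIED (mutation: 2) and one DELETED event in the log:
#   GET /api/v1/namespaces/watch-9526/configmaps?watch=1&resourceVersion=<rv-of-first-update>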
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":182,"skipped":3080,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:08:51.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-d97065ce-58e2-4335-89ed-e8966fafb2eb [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:08:51.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-346" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":183,"skipped":3124,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:08:51.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Jan 20 22:08:52.350: INFO: created pod pod-service-account-defaultsa Jan 20 22:08:52.350: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 20 22:08:52.364: INFO: created pod pod-service-account-mountsa Jan 20 22:08:52.364: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 20 22:08:52.451: INFO: created pod pod-service-account-nomountsa Jan 20 22:08:52.451: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 20 22:08:52.464: INFO: created pod pod-service-account-defaultsa-mountspec Jan 20 22:08:52.464: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 20 22:08:52.514: INFO: created pod pod-service-account-mountsa-mountspec Jan 20 22:08:52.514: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 20 22:08:52.590: INFO: created pod pod-service-account-nomountsa-mountspec Jan 20 22:08:52.590: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 20 22:08:52.772: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 20 22:08:52.773: INFO: pod 
pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 20 22:08:52.842: INFO: created pod pod-service-account-mountsa-nomountspec Jan 20 22:08:52.842: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 20 22:08:52.858: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 20 22:08:52.858: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:08:52.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4422" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":184,"skipped":3127,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:08:54.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Jan 20 22:08:57.138: INFO: Waiting up to 5m0s for pod "client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2" in namespace "containers-3819" to be "success or failure" Jan 20 22:08:57.501: INFO: Pod "client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2": Phase="Pending", Reason="", readiness=false. Elapsed: 363.371653ms Jan 20 22:08:59.653: INFO: Pod "client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.514931233s Jan 20 22:09:01.744: INFO: Pod "client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.605856067s Jan 20 22:09:05.664: INFO: Pod "client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.525610736s Jan 20 22:09:09.245: INFO: Pod "client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.107339952s Jan 20 22:09:11.262: INFO: Pod "client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.123543188s Jan 20 22:09:14.346: INFO: Pod "client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.208408279s Jan 20 22:09:16.732: INFO: Pod "client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.59441922s Jan 20 22:09:18.765: INFO: Pod "client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.626613338s Jan 20 22:09:21.527: INFO: Pod "client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.389427031s STEP: Saw pod success Jan 20 22:09:21.528: INFO: Pod "client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2" satisfied condition "success or failure" Jan 20 22:09:21.545: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2 container test-container: STEP: delete the pod Jan 20 22:09:22.416: INFO: Waiting for pod client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2 to disappear Jan 20 22:09:22.421: INFO: Pod client-containers-1e9e2834-e5dc-4009-bb89-2e27c6b987d2 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:09:22.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3819" for this suite. • [SLOW TEST:28.121 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:09:22.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 22:09:22.620: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10" in namespace "downward-api-8944" to be "success or failure" Jan 20 22:09:22.722: INFO: Pod "downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10": Phase="Pending", Reason="", readiness=false. Elapsed: 101.022466ms Jan 20 22:09:24.738: INFO: Pod "downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117456642s Jan 20 22:09:26.819: INFO: Pod "downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198646856s Jan 20 22:09:28.831: INFO: Pod "downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210002335s Jan 20 22:09:30.838: INFO: Pod "downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217688001s Jan 20 22:09:32.845: INFO: Pod "downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.224016062s Jan 20 22:09:34.852: INFO: Pod "downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10": Phase="Pending", Reason="", readiness=false. Elapsed: 12.231400241s Jan 20 22:09:36.868: INFO: Pod "downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.247291642s STEP: Saw pod success Jan 20 22:09:36.868: INFO: Pod "downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10" satisfied condition "success or failure" Jan 20 22:09:36.875: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10 container client-container: STEP: delete the pod Jan 20 22:09:36.973: INFO: Waiting for pod downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10 to disappear Jan 20 22:09:36.984: INFO: Pod downwardapi-volume-b88e30d5-222c-47b7-8856-993be4d03f10 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:09:36.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8944" for this suite. • [SLOW TEST:14.558 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3164,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:09:37.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-33222756-2cb7-459b-874b-26a063b51d41 STEP: Creating a pod to test consume configMaps Jan 20 22:09:37.152: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c81f4c0a-62a3-4000-a250-94554e821acb" in namespace "projected-2111" to be "success or failure" Jan 20 22:09:37.189: INFO: Pod "pod-projected-configmaps-c81f4c0a-62a3-4000-a250-94554e821acb": Phase="Pending", Reason="", readiness=false. Elapsed: 36.745697ms Jan 20 22:09:39.199: INFO: Pod "pod-projected-configmaps-c81f4c0a-62a3-4000-a250-94554e821acb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047146763s Jan 20 22:09:41.207: INFO: Pod "pod-projected-configmaps-c81f4c0a-62a3-4000-a250-94554e821acb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055336257s Jan 20 22:09:43.215: INFO: Pod "pod-projected-configmaps-c81f4c0a-62a3-4000-a250-94554e821acb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.063020392s Jan 20 22:09:45.220: INFO: Pod "pod-projected-configmaps-c81f4c0a-62a3-4000-a250-94554e821acb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068049636s Jan 20 22:09:47.226: INFO: Pod "pod-projected-configmaps-c81f4c0a-62a3-4000-a250-94554e821acb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074153848s STEP: Saw pod success Jan 20 22:09:47.226: INFO: Pod "pod-projected-configmaps-c81f4c0a-62a3-4000-a250-94554e821acb" satisfied condition "success or failure" Jan 20 22:09:47.232: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c81f4c0a-62a3-4000-a250-94554e821acb container projected-configmap-volume-test: STEP: delete the pod Jan 20 22:09:47.351: INFO: Waiting for pod pod-projected-configmaps-c81f4c0a-62a3-4000-a250-94554e821acb to disappear Jan 20 22:09:47.363: INFO: Pod pod-projected-configmaps-c81f4c0a-62a3-4000-a250-94554e821acb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:09:47.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2111" for this suite. • [SLOW TEST:10.379 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3183,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:09:47.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 20 22:09:56.053: INFO: Successfully updated pod "adopt-release-ssdxn" STEP: Checking that the Job readopts the Pod Jan 20 22:09:56.053: INFO: Waiting up to 15m0s for pod "adopt-release-ssdxn" in namespace "job-9545" to be "adopted" Jan 20 22:09:56.085: INFO: Pod "adopt-release-ssdxn": Phase="Running", Reason="", readiness=true. Elapsed: 31.294134ms Jan 20 22:09:58.100: INFO: Pod "adopt-release-ssdxn": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.04620531s Jan 20 22:09:58.100: INFO: Pod "adopt-release-ssdxn" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 20 22:09:58.624: INFO: Successfully updated pod "adopt-release-ssdxn" STEP: Checking that the Job releases the Pod Jan 20 22:09:58.624: INFO: Waiting up to 15m0s for pod "adopt-release-ssdxn" in namespace "job-9545" to be "released" Jan 20 22:09:58.685: INFO: Pod "adopt-release-ssdxn": Phase="Running", Reason="", readiness=true. Elapsed: 60.928824ms Jan 20 22:09:58.685: INFO: Pod "adopt-release-ssdxn" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:09:58.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9545" for this suite. • [SLOW TEST:11.333 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":188,"skipped":3218,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:09:58.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2883 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-2883 I0120 22:09:59.031324 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2883, replica count: 2 I0120 22:10:02.082775 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 22:10:05.083970 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 22:10:08.085960 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 22:10:11.086969 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 20 22:10:11.087: INFO: Creating new exec pod Jan 20 22:10:20.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2883 
execpodk9ff5 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 20 22:10:20.732: INFO: stderr: "I0120 22:10:20.507581 2261 log.go:172] (0xc00044b3f0) (0xc000904500) Create stream\nI0120 22:10:20.508012 2261 log.go:172] (0xc00044b3f0) (0xc000904500) Stream added, broadcasting: 1\nI0120 22:10:20.526479 2261 log.go:172] (0xc00044b3f0) Reply frame received for 1\nI0120 22:10:20.526859 2261 log.go:172] (0xc00044b3f0) (0xc0007655e0) Create stream\nI0120 22:10:20.526922 2261 log.go:172] (0xc00044b3f0) (0xc0007655e0) Stream added, broadcasting: 3\nI0120 22:10:20.529157 2261 log.go:172] (0xc00044b3f0) Reply frame received for 3\nI0120 22:10:20.529221 2261 log.go:172] (0xc00044b3f0) (0xc0005b6820) Create stream\nI0120 22:10:20.529239 2261 log.go:172] (0xc00044b3f0) (0xc0005b6820) Stream added, broadcasting: 5\nI0120 22:10:20.531158 2261 log.go:172] (0xc00044b3f0) Reply frame received for 5\nI0120 22:10:20.617379 2261 log.go:172] (0xc00044b3f0) Data frame received for 5\nI0120 22:10:20.617524 2261 log.go:172] (0xc0005b6820) (5) Data frame handling\nI0120 22:10:20.617603 2261 log.go:172] (0xc0005b6820) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0120 22:10:20.619818 2261 log.go:172] (0xc00044b3f0) Data frame received for 5\nI0120 22:10:20.619835 2261 log.go:172] (0xc0005b6820) (5) Data frame handling\nI0120 22:10:20.619853 2261 log.go:172] (0xc0005b6820) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0120 22:10:20.713699 2261 log.go:172] (0xc00044b3f0) (0xc0007655e0) Stream removed, broadcasting: 3\nI0120 22:10:20.713937 2261 log.go:172] (0xc00044b3f0) Data frame received for 1\nI0120 22:10:20.713965 2261 log.go:172] (0xc00044b3f0) (0xc0005b6820) Stream removed, broadcasting: 5\nI0120 22:10:20.714025 2261 log.go:172] (0xc000904500) (1) Data frame handling\nI0120 22:10:20.714056 2261 log.go:172] (0xc000904500) (1) Data frame sent\nI0120 22:10:20.714068 2261 log.go:172] (0xc00044b3f0) (0xc000904500) Stream removed, broadcasting: 1\nI0120 22:10:20.714100 2261 log.go:172] (0xc00044b3f0) Go away received\nI0120 22:10:20.715893 2261 log.go:172] (0xc00044b3f0) (0xc000904500) Stream removed, broadcasting: 1\nI0120 22:10:20.716019 2261 log.go:172] (0xc00044b3f0) (0xc0007655e0) Stream removed, broadcasting: 3\nI0120 22:10:20.716041 2261 log.go:172] (0xc00044b3f0) (0xc0005b6820) Stream removed, broadcasting: 5\n" Jan 20 22:10:20.732: INFO: stdout: "" Jan 20 22:10:20.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2883 execpodk9ff5 -- /bin/sh -x -c nc -zv -t -w 2 10.96.252.234 80' Jan 20 22:10:21.138: INFO: stderr: "I0120 22:10:20.953322 2284 log.go:172] (0xc000bda210) (0xc000b161e0) Create stream\nI0120 22:10:20.953840 2284 log.go:172] (0xc000bda210) (0xc000b161e0) Stream added, broadcasting: 1\nI0120 22:10:20.960808 2284 log.go:172] (0xc000bda210) Reply frame received for 1\nI0120 22:10:20.960874 2284 log.go:172] (0xc000bda210) (0xc000a980a0) Create stream\nI0120 22:10:20.960890 2284 log.go:172] (0xc000bda210) (0xc000a980a0) Stream added, broadcasting: 3\nI0120 22:10:20.962658 2284 log.go:172] (0xc000bda210) Reply frame received for 3\nI0120 22:10:20.962704 2284 log.go:172] (0xc000bda210) (0xc000b16280) Create stream\nI0120 22:10:20.962714 2284 log.go:172] (0xc000bda210) (0xc000b16280) Stream added, broadcasting: 5\nI0120 22:10:20.963806 2284 log.go:172] (0xc000bda210) Reply frame received for 5\nI0120 22:10:21.034092 2284 log.go:172] (0xc000bda210) Data frame received for 5\nI0120 
22:10:21.034161 2284 log.go:172] (0xc000b16280) (5) Data frame handling\nI0120 22:10:21.034184 2284 log.go:172] (0xc000b16280) (5) Data frame sent\nI0120 22:10:21.034200 2284 log.go:172] (0xc000bda210) Data frame received for 5\nI0120 22:10:21.034209 2284 log.go:172] (0xc000b16280) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.252.234 80\nI0120 22:10:21.034285 2284 log.go:172] (0xc000b16280) (5) Data frame sent\nI0120 22:10:21.043603 2284 log.go:172] (0xc000bda210) Data frame received for 5\nI0120 22:10:21.043662 2284 log.go:172] (0xc000b16280) (5) Data frame handling\nI0120 22:10:21.043676 2284 log.go:172] (0xc000b16280) (5) Data frame sent\nConnection to 10.96.252.234 80 port [tcp/http] succeeded!\nI0120 22:10:21.119174 2284 log.go:172] (0xc000bda210) (0xc000a980a0) Stream removed, broadcasting: 3\nI0120 22:10:21.119367 2284 log.go:172] (0xc000bda210) Data frame received for 1\nI0120 22:10:21.119406 2284 log.go:172] (0xc000b161e0) (1) Data frame handling\nI0120 22:10:21.119453 2284 log.go:172] (0xc000b161e0) (1) Data frame sent\nI0120 22:10:21.119462 2284 log.go:172] (0xc000bda210) (0xc000b161e0) Stream removed, broadcasting: 1\nI0120 22:10:21.120597 2284 log.go:172] (0xc000bda210) (0xc000b16280) Stream removed, broadcasting: 5\nI0120 22:10:21.120660 2284 log.go:172] (0xc000bda210) Go away received\nI0120 22:10:21.120813 2284 log.go:172] (0xc000bda210) (0xc000b161e0) Stream removed, broadcasting: 1\nI0120 22:10:21.120847 2284 log.go:172] (0xc000bda210) (0xc000a980a0) Stream removed, broadcasting: 3\nI0120 22:10:21.120893 2284 log.go:172] (0xc000bda210) (0xc000b16280) Stream removed, broadcasting: 5\n" Jan 20 22:10:21.138: INFO: stdout: "" Jan 20 22:10:21.138: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:10:21.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2883" for this suite. 
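The type-change test starts from an ExternalName Service and patches it to ClusterIP, at which point the replication controller's two pods back it and the nc probes above succeed against both the service name and its cluster IP. A sketch follows, with the external name assumed since the log never prints it:

apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-2883
spec:
  type: ExternalName
  externalName: example.com        # assumed; the original target is not shown in the log
# The test then patches the type and adds a port, after which kube-proxy
# programs the cluster IP (10.96.252.234 above) and the probes pass:
#   spec:
#     type: ClusterIP
#     ports:
#     - port: 80                   # the port probed with nc above
#       targetPort: 80             # assumed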
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:22.480 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":189,"skipped":3238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:10:21.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 20 22:10:32.756: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:10:33.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4060" for this suite. 
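TerminationMessagePolicy FallbackToLogsOnError falls back to container logs only when the container fails and the termination-message file is empty; here the pod succeeds and the file holds "OK", so the file wins, matching the Expected: &{OK} assertion above. A sketch of such a pod; the name, image, and command are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: docker.io/library/busybox:1.29            # assumed image
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log     # the default path
    terminationMessagePolicy: FallbackToLogsOnError
# Because the container exits 0 and the file is non-empty, the kubelet reports
# the file contents as the termination message; logs would be consulted only
# for a failed container with an empty file.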
• [SLOW TEST:11.904 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3276,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:10:33.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-3becabdd-9685-42e5-8d03-35725bd108bc STEP: Creating a pod to test consume secrets Jan 20 22:10:33.300: INFO: Waiting up to 5m0s for pod "pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013" in namespace "secrets-1812" to be "success or failure" Jan 20 22:10:33.346: INFO: Pod "pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013": Phase="Pending", Reason="", readiness=false. Elapsed: 45.440254ms Jan 20 22:10:35.368: INFO: Pod "pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067588235s Jan 20 22:10:37.373: INFO: Pod "pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073275301s Jan 20 22:10:39.382: INFO: Pod "pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081720305s Jan 20 22:10:41.392: INFO: Pod "pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09180056s Jan 20 22:10:43.444: INFO: Pod "pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013": Phase="Pending", Reason="", readiness=false. Elapsed: 10.144278796s Jan 20 22:10:45.483: INFO: Pod "pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013": Phase="Pending", Reason="", readiness=false. Elapsed: 12.183114363s Jan 20 22:10:47.498: INFO: Pod "pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.198114177s STEP: Saw pod success Jan 20 22:10:47.498: INFO: Pod "pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013" satisfied condition "success or failure" Jan 20 22:10:47.505: INFO: Trying to get logs from node jerma-node pod pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013 container secret-volume-test: STEP: delete the pod Jan 20 22:10:47.595: INFO: Waiting for pod pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013 to disappear Jan 20 22:10:47.603: INFO: Pod pod-secrets-9a722eac-8989-4fe0-b967-7972a1214013 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:10:47.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1812" for this suite. • [SLOW TEST:14.519 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:10:47.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 22:10:47.760: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jan 20 22:10:47.788: INFO: Number of nodes with available pods: 0 Jan 20 22:10:47.789: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:10:48.805: INFO: Number of nodes with available pods: 0 Jan 20 22:10:48.805: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:10:49.927: INFO: Number of nodes with available pods: 0 Jan 20 22:10:49.927: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:10:50.932: INFO: Number of nodes with available pods: 0 Jan 20 22:10:50.932: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:10:51.805: INFO: Number of nodes with available pods: 0 Jan 20 22:10:51.805: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:10:52.806: INFO: Number of nodes with available pods: 0 Jan 20 22:10:52.806: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:10:54.735: INFO: Number of nodes with available pods: 0 Jan 20 22:10:54.735: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:10:55.412: INFO: Number of nodes with available pods: 0 Jan 20 22:10:55.412: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:10:56.559: INFO: Number of nodes with available pods: 0 Jan 20 22:10:56.560: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:10:56.817: INFO: Number of nodes with available pods: 0 Jan 20 22:10:56.817: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:10:57.812: INFO: Number of nodes with available pods: 1 Jan 20 22:10:57.812: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:10:58.808: INFO: Number of nodes with available pods: 2 Jan 20 22:10:58.808: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 20 22:10:58.860: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:10:58.860: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:10:59.903: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:10:59.903: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:00.901: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:00.902: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:02.032: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:02.032: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:02.900: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:02.900: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 20 22:11:03.909: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:03.909: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:03.909: INFO: Pod daemon-set-nl8lp is not available Jan 20 22:11:04.928: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:04.928: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:04.928: INFO: Pod daemon-set-nl8lp is not available Jan 20 22:11:05.904: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:05.905: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:05.905: INFO: Pod daemon-set-nl8lp is not available Jan 20 22:11:06.902: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:06.902: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:06.902: INFO: Pod daemon-set-nl8lp is not available Jan 20 22:11:07.902: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:07.902: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:07.902: INFO: Pod daemon-set-nl8lp is not available Jan 20 22:11:08.908: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:08.908: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:08.908: INFO: Pod daemon-set-nl8lp is not available Jan 20 22:11:09.902: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:09.903: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:09.903: INFO: Pod daemon-set-nl8lp is not available Jan 20 22:11:10.906: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:10.906: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:10.906: INFO: Pod daemon-set-nl8lp is not available Jan 20 22:11:11.914: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:11.914: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 20 22:11:11.914: INFO: Pod daemon-set-nl8lp is not available Jan 20 22:11:12.903: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:12.903: INFO: Wrong image for pod: daemon-set-nl8lp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:12.903: INFO: Pod daemon-set-nl8lp is not available Jan 20 22:11:14.343: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:14.343: INFO: Pod daemon-set-nh8xw is not available Jan 20 22:11:14.901: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:14.901: INFO: Pod daemon-set-nh8xw is not available Jan 20 22:11:15.913: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:15.913: INFO: Pod daemon-set-nh8xw is not available Jan 20 22:11:17.582: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:17.582: INFO: Pod daemon-set-nh8xw is not available Jan 20 22:11:17.902: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:17.902: INFO: Pod daemon-set-nh8xw is not available Jan 20 22:11:18.899: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:18.899: INFO: Pod daemon-set-nh8xw is not available Jan 20 22:11:19.903: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:19.903: INFO: Pod daemon-set-nh8xw is not available Jan 20 22:11:20.903: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:21.903: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:22.979: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:23.907: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:24.936: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:24.936: INFO: Pod daemon-set-k9kq9 is not available Jan 20 22:11:25.903: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:25.903: INFO: Pod daemon-set-k9kq9 is not available Jan 20 22:11:26.908: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:26.909: INFO: Pod daemon-set-k9kq9 is not available Jan 20 22:11:27.905: INFO: Wrong image for pod: daemon-set-k9kq9. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:27.906: INFO: Pod daemon-set-k9kq9 is not available Jan 20 22:11:28.903: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:28.903: INFO: Pod daemon-set-k9kq9 is not available Jan 20 22:11:29.906: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:29.906: INFO: Pod daemon-set-k9kq9 is not available Jan 20 22:11:30.901: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:30.901: INFO: Pod daemon-set-k9kq9 is not available Jan 20 22:11:31.903: INFO: Wrong image for pod: daemon-set-k9kq9. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 20 22:11:31.904: INFO: Pod daemon-set-k9kq9 is not available Jan 20 22:11:32.906: INFO: Pod daemon-set-wntm4 is not available STEP: Check that daemon pods are still running on every node of the cluster. Jan 20 22:11:32.931: INFO: Number of nodes with available pods: 1 Jan 20 22:11:32.931: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:11:33.952: INFO: Number of nodes with available pods: 1 Jan 20 22:11:33.953: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:11:34.943: INFO: Number of nodes with available pods: 1 Jan 20 22:11:34.943: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:11:35.944: INFO: Number of nodes with available pods: 1 Jan 20 22:11:35.944: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:11:37.020: INFO: Number of nodes with available pods: 1 Jan 20 22:11:37.020: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:11:37.948: INFO: Number of nodes with available pods: 1 Jan 20 22:11:37.949: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:11:38.956: INFO: Number of nodes with available pods: 1 Jan 20 22:11:38.956: INFO: Node jerma-node is running more than one daemon pod Jan 20 22:11:39.990: INFO: Number of nodes with available pods: 2 Jan 20 22:11:39.990: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4721, will wait for the garbage collector to delete the pods Jan 20 22:11:40.074: INFO: Deleting DaemonSet.extensions daemon-set took: 9.4937ms Jan 20 22:11:40.375: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.588867ms Jan 20 22:11:47.112: INFO: Number of nodes with available pods: 0 Jan 20 22:11:47.112: INFO: Number of running nodes: 0, number of available pods: 0 Jan 20 22:11:47.115: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4721/daemonsets","resourceVersion":"3264187"},"items":null} Jan 20 22:11:47.118: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4721/pods","resourceVersion":"3264187"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:11:47.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4721" for this suite. • [SLOW TEST:59.518 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":192,"skipped":3320,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:11:47.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 22:11:48.123: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 22:11:50.142: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155108, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155108, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155108, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155108, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:11:52.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155108, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155108, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155108, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155108, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:11:54.151: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155108, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155108, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155108, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155108, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 22:11:57.215: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:11:57.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7460" for this suite. STEP: Destroying namespace "webhook-7460-markers" for this suite. 
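The mutating webhook registered above intercepts pod CREATE requests and rewrites them before admission. For anyone reproducing this spec by hand, the registration object looks roughly like the following; this is a minimal sketch, not the framework's exact payload. The service name e2e-test-webhook and the namespace webhook-7460 are taken from the log above, while the configuration name, serving path, and CA bundle are placeholders.

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: mutate-pod-example                # hypothetical name
    webhooks:
      - name: mutate-pod.example.com          # hypothetical name
        clientConfig:
          service:
            name: e2e-test-webhook            # service paired with endpoints in the log above
            namespace: webhook-7460
            path: /mutating-pods              # assumed serving path
          caBundle: BASE64_CA_BUNDLE_HERE     # placeholder; supply the webhook server's CA
        rules:
          - apiGroups: [""]
            apiVersions: ["v1"]
            operations: ["CREATE"]
            resources: ["pods"]
        sideEffects: None
        admissionReviewVersions: ["v1", "v1beta1"]
        failurePolicy: Fail
    EOF

Once a configuration of this shape is in place, the "create a pod that should be updated by the webhook" step can only pass if the API server reaches the service and applies the patch it returns.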
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.505 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":193,"skipped":3333,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:11:57.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Jan 20 22:11:57.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2929 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 20 22:12:10.217: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0120 22:12:09.131912 2304 log.go:172] (0xc0001191e0) (0xc0006cf900) Create stream\nI0120 22:12:09.132754 2304 log.go:172] (0xc0001191e0) (0xc0006cf900) Stream added, broadcasting: 1\nI0120 22:12:09.139116 2304 log.go:172] (0xc0001191e0) Reply frame received for 1\nI0120 22:12:09.139240 2304 log.go:172] (0xc0001191e0) (0xc000676000) Create stream\nI0120 22:12:09.139278 2304 log.go:172] (0xc0001191e0) (0xc000676000) Stream added, broadcasting: 3\nI0120 22:12:09.141534 2304 log.go:172] (0xc0001191e0) Reply frame received for 3\nI0120 22:12:09.141581 2304 log.go:172] (0xc0001191e0) (0xc0006cf9a0) Create stream\nI0120 22:12:09.141595 2304 log.go:172] (0xc0001191e0) (0xc0006cf9a0) Stream added, broadcasting: 5\nI0120 22:12:09.144369 2304 log.go:172] (0xc0001191e0) Reply frame received for 5\nI0120 22:12:09.144532 2304 log.go:172] (0xc0001191e0) (0xc0006760a0) Create stream\nI0120 22:12:09.144553 2304 log.go:172] (0xc0001191e0) (0xc0006760a0) Stream added, broadcasting: 7\nI0120 22:12:09.146993 2304 log.go:172] (0xc0001191e0) Reply frame received for 7\nI0120 22:12:09.147229 2304 log.go:172] (0xc000676000) (3) Writing data frame\nI0120 22:12:09.147464 2304 log.go:172] (0xc000676000) (3) Writing data frame\nI0120 22:12:09.149768 2304 log.go:172] (0xc0001191e0) Data frame received for 5\nI0120 22:12:09.149788 2304 log.go:172] (0xc0006cf9a0) (5) Data frame handling\nI0120 22:12:09.149811 2304 log.go:172] (0xc0006cf9a0) (5) Data frame sent\nI0120 22:12:09.151977 2304 log.go:172] (0xc0001191e0) Data frame received for 5\nI0120 22:12:09.151995 2304 log.go:172] (0xc0006cf9a0) (5) Data frame handling\nI0120 22:12:09.152008 2304 log.go:172] (0xc0006cf9a0) (5) Data frame sent\nI0120 22:12:10.149348 2304 log.go:172] (0xc0001191e0) Data frame received for 1\nI0120 22:12:10.149518 2304 log.go:172] (0xc0006cf900) (1) Data frame handling\nI0120 22:12:10.149564 2304 log.go:172] (0xc0006cf900) (1) Data frame sent\nI0120 22:12:10.149594 2304 log.go:172] (0xc0001191e0) (0xc0006cf900) Stream removed, broadcasting: 1\nI0120 22:12:10.151311 2304 log.go:172] (0xc0001191e0) (0xc000676000) Stream removed, broadcasting: 3\nI0120 22:12:10.151505 2304 log.go:172] (0xc0001191e0) (0xc0006cf9a0) Stream removed, broadcasting: 5\nI0120 22:12:10.151713 2304 log.go:172] (0xc0001191e0) (0xc0006760a0) Stream removed, broadcasting: 7\nI0120 22:12:10.151791 2304 log.go:172] (0xc0001191e0) (0xc0006cf900) Stream removed, broadcasting: 1\nI0120 22:12:10.151812 2304 log.go:172] (0xc0001191e0) (0xc000676000) Stream removed, broadcasting: 3\nI0120 22:12:10.151829 2304 log.go:172] (0xc0001191e0) (0xc0006cf9a0) Stream removed, broadcasting: 5\nI0120 22:12:10.151837 2304 log.go:172] (0xc0001191e0) (0xc0006760a0) Stream removed, broadcasting: 7\nI0120 22:12:10.152352 2304 log.go:172] (0xc0001191e0) Go away received\n" Jan 20 22:12:10.217: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:12:12.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2929" for this suite. 
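The command driven by this spec, reformatted for readability. The captured stderr already flags --generator=job/v1 as deprecated, so the log's own suggested replacement is sketched second; note kubectl create job has no --rm/--attach semantics, so the cat-through-attached-stream behavior is specific to kubectl run. Piping printf is an assumption about how to reproduce the "abcd1234" the framework sends on stdin.

    # Form exercised by the spec (deprecated generator, per the stderr above):
    printf 'abcd1234' | kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2929 \
      run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
      --rm=true --generator=job/v1 --restart=OnFailure \
      --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'

    # Replacement path suggested by the deprecation warning; the delete is explicit here:
    kubectl --namespace=kubectl-2929 create job e2e-test-rm-busybox-job \
      --image=docker.io/library/busybox:1.29 -- sh -c 'echo "stdin closed"'
    kubectl --namespace=kubectl-2929 delete job e2e-test-rm-busybox-job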
• [SLOW TEST:14.600 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":194,"skipped":3334,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:12:12.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-884a27c0-9a44-4efb-9eea-eca61f72ba75 in namespace container-probe-4028 Jan 20 22:12:20.442: INFO: Started pod test-webserver-884a27c0-9a44-4efb-9eea-eca61f72ba75 in namespace container-probe-4028 STEP: checking the pod's current state and verifying that restartCount is present Jan 20 22:12:20.449: INFO: Initial restart count of pod test-webserver-884a27c0-9a44-4efb-9eea-eca61f72ba75 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:16:20.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4028" for this suite. 
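The pod built by this probe spec pairs a web server with an HTTP liveness probe that keeps succeeding, so restartCount must stay 0 for the whole four-minute observation window. A minimal sketch of such a pod follows; the image tag and probe tuning are assumptions, not the framework's exact values.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-webserver            # name pattern follows the pod in the log above
    spec:
      containers:
        - name: test-webserver
          image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed image/tag
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /                 # an endpoint that keeps returning 200
              port: 80
            initialDelaySeconds: 15   # assumed probe tuning
            periodSeconds: 10
            failureThreshold: 3
    EOF
    # The spec then watches the pod and asserts this stays "0":
    kubectl get pod test-webserver \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'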
• [SLOW TEST:248.411 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3341,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:16:20.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 20 22:16:31.379: INFO: Successfully updated pod "annotationupdatedc5526fd-1737-48ee-9d2a-90619e90c8f0" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:16:33.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8353" for this suite. 
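The projected downwardAPI volume in this spec exposes pod annotations as a file, and the kubelet refreshes that file when the annotations change, which is what the "Successfully updated pod" step above verifies. A minimal sketch, assuming an agnhost pause container (the image family used elsewhere in this run) and a hypothetical pod name:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo     # hypothetical name
      annotations:
        build: one
    spec:
      containers:
        - name: client-container
          image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
          args: ["pause"]             # agnhost subcommand that just blocks
          volumeMounts:
            - name: podinfo
              mountPath: /etc/podinfo
      volumes:
        - name: podinfo
          projected:
            sources:
              - downwardAPI:
                  items:
                    - path: annotations
                      fieldRef:
                        fieldPath: metadata.annotations
    EOF
    # Changing the annotation is eventually reflected in the projected file:
    kubectl annotate pod annotationupdate-demo build=two --overwrite
    kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations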
• [SLOW TEST:12.824 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:16:33.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 20 22:16:33.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6450' Jan 20 22:16:36.286: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 20 22:16:36.286: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773 Jan 20 22:16:36.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-6450' Jan 20 22:16:36.582: INFO: stderr: "" Jan 20 22:16:36.582: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:16:36.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6450" for this suite. 
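The commands exercised by this run-job spec, reformatted for readability, plus the replacement suggested by the deprecation warning in the captured stderr:

    # Command exercised by the spec:
    kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6450 \
      run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 \
      --image=docker.io/library/httpd:2.4.38-alpine
    # Verification and cleanup, matching the AfterEach above:
    kubectl --namespace=kubectl-6450 get job e2e-test-httpd-job
    kubectl --namespace=kubectl-6450 delete jobs e2e-test-httpd-job
    # Replacement suggested by the deprecation warning:
    kubectl --namespace=kubectl-6450 create job e2e-test-httpd-job \
      --image=docker.io/library/httpd:2.4.38-alpine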
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":197,"skipped":3369,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:16:36.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-5956/secret-test-cd4837e8-5e45-4e19-bb87-cc345e9791dc STEP: Creating a pod to test consume secrets Jan 20 22:16:36.772: INFO: Waiting up to 5m0s for pod "pod-configmaps-99e2e6bf-ed1e-4fef-ab60-88b37292c707" in namespace "secrets-5956" to be "success or failure" Jan 20 22:16:36.816: INFO: Pod "pod-configmaps-99e2e6bf-ed1e-4fef-ab60-88b37292c707": Phase="Pending", Reason="", readiness=false. Elapsed: 44.62041ms Jan 20 22:16:38.897: INFO: Pod "pod-configmaps-99e2e6bf-ed1e-4fef-ab60-88b37292c707": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125132783s Jan 20 22:16:40.905: INFO: Pod "pod-configmaps-99e2e6bf-ed1e-4fef-ab60-88b37292c707": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133104929s Jan 20 22:16:42.915: INFO: Pod "pod-configmaps-99e2e6bf-ed1e-4fef-ab60-88b37292c707": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143172894s Jan 20 22:16:44.923: INFO: Pod "pod-configmaps-99e2e6bf-ed1e-4fef-ab60-88b37292c707": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15159391s Jan 20 22:16:46.933: INFO: Pod "pod-configmaps-99e2e6bf-ed1e-4fef-ab60-88b37292c707": Phase="Pending", Reason="", readiness=false. Elapsed: 10.161182477s Jan 20 22:16:48.945: INFO: Pod "pod-configmaps-99e2e6bf-ed1e-4fef-ab60-88b37292c707": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.173445979s STEP: Saw pod success Jan 20 22:16:48.945: INFO: Pod "pod-configmaps-99e2e6bf-ed1e-4fef-ab60-88b37292c707" satisfied condition "success or failure" Jan 20 22:16:48.948: INFO: Trying to get logs from node jerma-node pod pod-configmaps-99e2e6bf-ed1e-4fef-ab60-88b37292c707 container env-test: STEP: delete the pod Jan 20 22:16:48.984: INFO: Waiting for pod pod-configmaps-99e2e6bf-ed1e-4fef-ab60-88b37292c707 to disappear Jan 20 22:16:48.992: INFO: Pod pod-configmaps-99e2e6bf-ed1e-4fef-ab60-88b37292c707 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:16:48.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5956" for this suite. 
• [SLOW TEST:12.375 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3416,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:16:49.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 22:16:49.170: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34256cba-832b-4d85-be45-b6df5205f634" in namespace "projected-4626" to be "success or failure" Jan 20 22:16:49.274: INFO: Pod "downwardapi-volume-34256cba-832b-4d85-be45-b6df5205f634": Phase="Pending", Reason="", readiness=false. Elapsed: 103.260854ms Jan 20 22:16:51.283: INFO: Pod "downwardapi-volume-34256cba-832b-4d85-be45-b6df5205f634": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112421646s Jan 20 22:16:53.293: INFO: Pod "downwardapi-volume-34256cba-832b-4d85-be45-b6df5205f634": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121973672s Jan 20 22:16:55.303: INFO: Pod "downwardapi-volume-34256cba-832b-4d85-be45-b6df5205f634": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132236026s Jan 20 22:16:57.313: INFO: Pod "downwardapi-volume-34256cba-832b-4d85-be45-b6df5205f634": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142721605s Jan 20 22:16:59.321: INFO: Pod "downwardapi-volume-34256cba-832b-4d85-be45-b6df5205f634": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150735268s STEP: Saw pod success Jan 20 22:16:59.322: INFO: Pod "downwardapi-volume-34256cba-832b-4d85-be45-b6df5205f634" satisfied condition "success or failure" Jan 20 22:16:59.326: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-34256cba-832b-4d85-be45-b6df5205f634 container client-container: STEP: delete the pod Jan 20 22:16:59.438: INFO: Waiting for pod downwardapi-volume-34256cba-832b-4d85-be45-b6df5205f634 to disappear Jan 20 22:16:59.447: INFO: Pod downwardapi-volume-34256cba-832b-4d85-be45-b6df5205f634 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:16:59.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4626" for this suite. 
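Here the downwardAPI volume item carries an explicit per-file mode, and the spec asserts the mounted file actually has it. A minimal sketch with assumed names; 0400 stands in for whatever mode the test sets (the container name client-container comes from the log above):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-mode   # hypothetical name
    spec:
      restartPolicy: Never
      containers:
        - name: client-container
          image: docker.io/library/busybox:1.29
          command: ["sh", "-c", "ls -l /etc/podinfo/podname && cat /etc/podinfo/podname"]
          volumeMounts:
            - name: podinfo
              mountPath: /etc/podinfo
      volumes:
        - name: podinfo
          projected:
            sources:
              - downwardAPI:
                  items:
                    - path: podname
                      mode: 0400      # the per-item mode the test asserts
                      fieldRef:
                        fieldPath: metadata.name
    EOF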
• [SLOW TEST:10.466 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3424,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:16:59.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 22:16:59.675: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 20 22:16:59.697: INFO: Number of nodes with available pods: 0 Jan 20 22:16:59.697: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 20 22:16:59.768: INFO: Number of nodes with available pods: 0 Jan 20 22:16:59.768: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:01.011: INFO: Number of nodes with available pods: 0 Jan 20 22:17:01.011: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:01.777: INFO: Number of nodes with available pods: 0 Jan 20 22:17:01.778: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:02.778: INFO: Number of nodes with available pods: 0 Jan 20 22:17:02.778: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:04.534: INFO: Number of nodes with available pods: 0 Jan 20 22:17:04.535: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:04.837: INFO: Number of nodes with available pods: 0 Jan 20 22:17:04.837: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:05.787: INFO: Number of nodes with available pods: 0 Jan 20 22:17:05.787: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:06.812: INFO: Number of nodes with available pods: 1 Jan 20 22:17:06.812: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 20 22:17:06.852: INFO: Number of nodes with available pods: 1 Jan 20 22:17:06.852: INFO: Number of running nodes: 0, number of available pods: 1 Jan 20 22:17:07.861: INFO: Number of nodes with available pods: 0 Jan 20 22:17:07.861: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 20 22:17:07.887: INFO: Number of nodes with available pods: 0 Jan 20 22:17:07.887: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:08.898: INFO: Number of nodes with available pods: 0 Jan 20 22:17:08.899: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:09.896: INFO: Number of nodes with available pods: 0 Jan 20 22:17:09.896: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:10.898: INFO: Number of nodes with available pods: 0 Jan 20 22:17:10.898: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:11.896: INFO: Number of nodes with available pods: 0 Jan 20 22:17:11.896: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:12.901: INFO: Number of nodes with available pods: 0 Jan 20 22:17:12.901: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:13.930: INFO: Number of nodes with available pods: 0 Jan 20 22:17:13.930: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:14.895: INFO: Number of nodes with available pods: 0 Jan 20 22:17:14.895: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:15.896: INFO: Number of nodes with available pods: 0 Jan 20 22:17:15.896: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:16.898: INFO: Number of nodes with available pods: 0 Jan 20 22:17:16.899: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:17.896: INFO: Number of nodes with available pods: 0 Jan 20 22:17:17.896: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:18.901: INFO: Number of nodes with 
available pods: 0 Jan 20 22:17:18.901: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:19.896: INFO: Number of nodes with available pods: 0 Jan 20 22:17:19.896: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:20.897: INFO: Number of nodes with available pods: 0 Jan 20 22:17:20.897: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:21.898: INFO: Number of nodes with available pods: 0 Jan 20 22:17:21.898: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:22.896: INFO: Number of nodes with available pods: 0 Jan 20 22:17:22.897: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:23.895: INFO: Number of nodes with available pods: 0 Jan 20 22:17:23.896: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:25.224: INFO: Number of nodes with available pods: 0 Jan 20 22:17:25.224: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:25.897: INFO: Number of nodes with available pods: 0 Jan 20 22:17:25.897: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:26.923: INFO: Number of nodes with available pods: 0 Jan 20 22:17:26.924: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:28.495: INFO: Number of nodes with available pods: 0 Jan 20 22:17:28.495: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:28.903: INFO: Number of nodes with available pods: 0 Jan 20 22:17:28.903: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:29.902: INFO: Number of nodes with available pods: 0 Jan 20 22:17:29.902: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:30.934: INFO: Number of nodes with available pods: 0 Jan 20 22:17:30.934: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 20 22:17:31.899: INFO: Number of nodes with available pods: 1 Jan 20 22:17:31.900: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4151, will wait for the garbage collector to delete the pods Jan 20 22:17:31.978: INFO: Deleting DaemonSet.extensions daemon-set took: 14.16044ms Jan 20 22:17:32.279: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.711611ms Jan 20 22:17:37.686: INFO: Number of nodes with available pods: 0 Jan 20 22:17:37.686: INFO: Number of running nodes: 0, number of available pods: 0 Jan 20 22:17:37.689: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4151/daemonsets","resourceVersion":"3265294"},"items":null} Jan 20 22:17:37.691: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4151/pods","resourceVersion":"3265294"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:17:37.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4151" for this suite. 
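This complex-daemon spec drives scheduling purely through node labels: the DaemonSet's pod template carries a nodeSelector, so flipping a node's label in and out of the selector launches and evicts the daemon pod, as the polling above shows. A rough equivalent by hand, assuming a hypothetical label key color (the log only mentions the values blue and green):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      updateStrategy:
        type: RollingUpdate           # the strategy the spec switches to mid-test
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          nodeSelector:
            color: blue               # hypothetical label key
          containers:
            - name: app
              image: docker.io/library/httpd:2.4.38-alpine   # image used by this suite's daemonsets
    EOF
    # Scheduling follows the node label, as in the log above:
    kubectl label node jerma-node color=blue                 # daemon pod launches on the node
    kubectl label node jerma-node color=green --overwrite    # pod is unscheduled until the selector matches again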
• [SLOW TEST:38.295 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":200,"skipped":3441,"failed":0} SS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:17:37.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 22:17:37.828: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1288 I0120 22:17:37.856989 9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1288, replica count: 1 I0120 22:17:38.908780 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 22:17:39.909902 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 22:17:40.910695 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 22:17:41.911533 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 22:17:42.912309 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 22:17:43.913391 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 22:17:44.914580 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 20 22:17:45.074: INFO: Created: latency-svc-f8z8q Jan 20 22:17:45.084: INFO: Got endpoints: latency-svc-f8z8q [69.025344ms] Jan 20 22:17:45.238: INFO: Created: latency-svc-6z4nd Jan 20 22:17:45.248: INFO: Got endpoints: latency-svc-6z4nd [163.214473ms] Jan 20 22:17:45.279: INFO: Created: latency-svc-d29kx Jan 20 22:17:45.284: INFO: Got endpoints: latency-svc-d29kx [199.711575ms] Jan 20 22:17:45.313: INFO: Created: latency-svc-xzllr Jan 20 22:17:45.318: INFO: Got endpoints: latency-svc-xzllr [233.256291ms] Jan 20 22:17:45.400: INFO: Created: latency-svc-2lr9h Jan 20 22:17:45.418: INFO: Got endpoints: latency-svc-2lr9h [330.015469ms] Jan 20 22:17:45.476: INFO: Created: latency-svc-8k8j5 Jan 20 22:17:45.481: INFO: Got endpoints: latency-svc-8k8j5 [393.093766ms] Jan 20 22:17:45.554: INFO: Created: latency-svc-mcrc5 Jan 20 
22:17:45.562: INFO: Got endpoints: latency-svc-mcrc5 [476.182552ms] Jan 20 22:17:45.586: INFO: Created: latency-svc-hsn4q Jan 20 22:17:45.596: INFO: Got endpoints: latency-svc-hsn4q [510.973652ms] Jan 20 22:17:45.618: INFO: Created: latency-svc-8h8dd Jan 20 22:17:45.633: INFO: Got endpoints: latency-svc-8h8dd [545.850256ms] Jan 20 22:17:45.759: INFO: Created: latency-svc-xj9k6 Jan 20 22:17:45.774: INFO: Got endpoints: latency-svc-xj9k6 [688.097246ms] Jan 20 22:17:45.818: INFO: Created: latency-svc-h6nd4 Jan 20 22:17:45.837: INFO: Got endpoints: latency-svc-h6nd4 [746.250026ms] Jan 20 22:17:45.947: INFO: Created: latency-svc-pbszr Jan 20 22:17:45.952: INFO: Got endpoints: latency-svc-pbszr [867.56707ms] Jan 20 22:17:46.023: INFO: Created: latency-svc-94v95 Jan 20 22:17:46.039: INFO: Got endpoints: latency-svc-94v95 [948.173056ms] Jan 20 22:17:46.090: INFO: Created: latency-svc-9s5fx Jan 20 22:17:46.097: INFO: Got endpoints: latency-svc-9s5fx [1.003077293s] Jan 20 22:17:46.137: INFO: Created: latency-svc-kvkdz Jan 20 22:17:46.158: INFO: Got endpoints: latency-svc-kvkdz [1.070815086s] Jan 20 22:17:46.242: INFO: Created: latency-svc-j87hw Jan 20 22:17:46.249: INFO: Got endpoints: latency-svc-j87hw [1.161235715s] Jan 20 22:17:46.315: INFO: Created: latency-svc-q4gmz Jan 20 22:17:46.320: INFO: Got endpoints: latency-svc-q4gmz [1.072523801s] Jan 20 22:17:46.381: INFO: Created: latency-svc-45c9d Jan 20 22:17:46.390: INFO: Got endpoints: latency-svc-45c9d [1.105430621s] Jan 20 22:17:46.441: INFO: Created: latency-svc-wtqjg Jan 20 22:17:46.446: INFO: Got endpoints: latency-svc-wtqjg [1.127497588s] Jan 20 22:17:46.477: INFO: Created: latency-svc-mn8zr Jan 20 22:17:46.522: INFO: Got endpoints: latency-svc-mn8zr [1.103797545s] Jan 20 22:17:46.543: INFO: Created: latency-svc-dkhsf Jan 20 22:17:46.551: INFO: Got endpoints: latency-svc-dkhsf [1.070083609s] Jan 20 22:17:46.572: INFO: Created: latency-svc-7gbvd Jan 20 22:17:46.665: INFO: Got endpoints: latency-svc-7gbvd [1.102807941s] Jan 20 22:17:46.671: INFO: Created: latency-svc-vbh87 Jan 20 22:17:46.677: INFO: Got endpoints: latency-svc-vbh87 [1.080344101s] Jan 20 22:17:46.721: INFO: Created: latency-svc-44wzk Jan 20 22:17:46.797: INFO: Got endpoints: latency-svc-44wzk [1.164640681s] Jan 20 22:17:46.823: INFO: Created: latency-svc-5w7tr Jan 20 22:17:46.830: INFO: Got endpoints: latency-svc-5w7tr [1.056493717s] Jan 20 22:17:46.854: INFO: Created: latency-svc-8hxgp Jan 20 22:17:46.866: INFO: Got endpoints: latency-svc-8hxgp [1.028848929s] Jan 20 22:17:46.877: INFO: Created: latency-svc-m7ckx Jan 20 22:17:46.884: INFO: Got endpoints: latency-svc-m7ckx [931.57558ms] Jan 20 22:17:46.964: INFO: Created: latency-svc-n72m8 Jan 20 22:17:46.968: INFO: Got endpoints: latency-svc-n72m8 [928.087778ms] Jan 20 22:17:47.035: INFO: Created: latency-svc-gpx2c Jan 20 22:17:47.042: INFO: Got endpoints: latency-svc-gpx2c [157.964711ms] Jan 20 22:17:47.119: INFO: Created: latency-svc-vtfrx Jan 20 22:17:47.145: INFO: Got endpoints: latency-svc-vtfrx [1.047573171s] Jan 20 22:17:47.149: INFO: Created: latency-svc-gwq9c Jan 20 22:17:47.167: INFO: Got endpoints: latency-svc-gwq9c [1.008483268s] Jan 20 22:17:47.209: INFO: Created: latency-svc-rdc27 Jan 20 22:17:47.212: INFO: Got endpoints: latency-svc-rdc27 [962.851993ms] Jan 20 22:17:47.285: INFO: Created: latency-svc-lbj2g Jan 20 22:17:47.296: INFO: Got endpoints: latency-svc-lbj2g [975.443766ms] Jan 20 22:17:47.332: INFO: Created: latency-svc-cz2hw Jan 20 22:17:47.345: INFO: Got endpoints: latency-svc-cz2hw [955.097061ms] Jan 
20 22:17:47.398: INFO: Created: latency-svc-vn9xj Jan 20 22:17:47.404: INFO: Got endpoints: latency-svc-vn9xj [957.939879ms] Jan 20 22:17:47.630: INFO: Created: latency-svc-msxj2 Jan 20 22:17:47.764: INFO: Got endpoints: latency-svc-msxj2 [1.24206683s] Jan 20 22:17:47.791: INFO: Created: latency-svc-g59w8 Jan 20 22:17:47.815: INFO: Got endpoints: latency-svc-g59w8 [1.263541431s] Jan 20 22:17:47.837: INFO: Created: latency-svc-j6ck8 Jan 20 22:17:47.845: INFO: Got endpoints: latency-svc-j6ck8 [1.179090417s] Jan 20 22:17:47.912: INFO: Created: latency-svc-zfg2t Jan 20 22:17:47.917: INFO: Got endpoints: latency-svc-zfg2t [1.239610621s] Jan 20 22:17:47.996: INFO: Created: latency-svc-drb9x Jan 20 22:17:48.001: INFO: Got endpoints: latency-svc-drb9x [1.20340245s] Jan 20 22:17:48.069: INFO: Created: latency-svc-pvl2l Jan 20 22:17:48.075: INFO: Got endpoints: latency-svc-pvl2l [1.244607177s] Jan 20 22:17:48.102: INFO: Created: latency-svc-29hcz Jan 20 22:17:48.106: INFO: Got endpoints: latency-svc-29hcz [1.24014774s] Jan 20 22:17:48.278: INFO: Created: latency-svc-5wmmv Jan 20 22:17:48.311: INFO: Created: latency-svc-lx4k5 Jan 20 22:17:48.312: INFO: Got endpoints: latency-svc-5wmmv [1.344681335s] Jan 20 22:17:48.324: INFO: Got endpoints: latency-svc-lx4k5 [1.281994107s] Jan 20 22:17:48.346: INFO: Created: latency-svc-v9jr2 Jan 20 22:17:48.358: INFO: Got endpoints: latency-svc-v9jr2 [1.213253857s] Jan 20 22:17:48.449: INFO: Created: latency-svc-d8rks Jan 20 22:17:48.472: INFO: Got endpoints: latency-svc-d8rks [1.304722647s] Jan 20 22:17:48.496: INFO: Created: latency-svc-wcnbv Jan 20 22:17:48.510: INFO: Got endpoints: latency-svc-wcnbv [1.297908039s] Jan 20 22:17:48.610: INFO: Created: latency-svc-56wpn Jan 20 22:17:48.629: INFO: Got endpoints: latency-svc-56wpn [1.33317371s] Jan 20 22:17:48.631: INFO: Created: latency-svc-gtb8q Jan 20 22:17:48.641: INFO: Got endpoints: latency-svc-gtb8q [1.295980235s] Jan 20 22:17:48.709: INFO: Created: latency-svc-2prj2 Jan 20 22:17:48.777: INFO: Got endpoints: latency-svc-2prj2 [1.373421456s] Jan 20 22:17:48.791: INFO: Created: latency-svc-8b7sh Jan 20 22:17:48.808: INFO: Got endpoints: latency-svc-8b7sh [1.043909454s] Jan 20 22:17:48.841: INFO: Created: latency-svc-gclnr Jan 20 22:17:48.844: INFO: Got endpoints: latency-svc-gclnr [1.028352519s] Jan 20 22:17:48.877: INFO: Created: latency-svc-bghmn Jan 20 22:17:48.959: INFO: Got endpoints: latency-svc-bghmn [1.112954783s] Jan 20 22:17:48.964: INFO: Created: latency-svc-7zbwl Jan 20 22:17:48.978: INFO: Got endpoints: latency-svc-7zbwl [1.061297293s] Jan 20 22:17:49.007: INFO: Created: latency-svc-d7xc2 Jan 20 22:17:49.012: INFO: Got endpoints: latency-svc-d7xc2 [1.010446506s] Jan 20 22:17:49.051: INFO: Created: latency-svc-4vpvq Jan 20 22:17:49.198: INFO: Created: latency-svc-6rfcc Jan 20 22:17:49.199: INFO: Got endpoints: latency-svc-4vpvq [1.123965787s] Jan 20 22:17:49.206: INFO: Got endpoints: latency-svc-6rfcc [1.099627478s] Jan 20 22:17:49.227: INFO: Created: latency-svc-4n7nz Jan 20 22:17:49.255: INFO: Got endpoints: latency-svc-4n7nz [942.697342ms] Jan 20 22:17:49.293: INFO: Created: latency-svc-4kthx Jan 20 22:17:49.380: INFO: Got endpoints: latency-svc-4kthx [1.055994679s] Jan 20 22:17:49.386: INFO: Created: latency-svc-kdd5z Jan 20 22:17:49.393: INFO: Got endpoints: latency-svc-kdd5z [1.034665499s] Jan 20 22:17:49.420: INFO: Created: latency-svc-m4qk8 Jan 20 22:17:49.428: INFO: Got endpoints: latency-svc-m4qk8 [955.313777ms] Jan 20 22:17:49.473: INFO: Created: latency-svc-226zm Jan 20 22:17:49.596: 
INFO: Got endpoints: latency-svc-226zm [1.086233046s] Jan 20 22:17:49.636: INFO: Created: latency-svc-q26rb Jan 20 22:17:49.646: INFO: Got endpoints: latency-svc-q26rb [1.016529286s] Jan 20 22:17:49.792: INFO: Created: latency-svc-xvcd5 Jan 20 22:17:49.797: INFO: Got endpoints: latency-svc-xvcd5 [1.155619136s] Jan 20 22:17:49.840: INFO: Created: latency-svc-9rsm6 Jan 20 22:17:49.856: INFO: Got endpoints: latency-svc-9rsm6 [1.078624528s] Jan 20 22:17:49.892: INFO: Created: latency-svc-sqprd Jan 20 22:17:49.970: INFO: Got endpoints: latency-svc-sqprd [1.162007525s] Jan 20 22:17:50.021: INFO: Created: latency-svc-dm76s Jan 20 22:17:50.041: INFO: Got endpoints: latency-svc-dm76s [1.197090986s] Jan 20 22:17:50.058: INFO: Created: latency-svc-dnbvv Jan 20 22:17:50.144: INFO: Created: latency-svc-mdgzx Jan 20 22:17:50.145: INFO: Got endpoints: latency-svc-dnbvv [1.186075924s] Jan 20 22:17:50.149: INFO: Got endpoints: latency-svc-mdgzx [1.170637414s] Jan 20 22:17:50.237: INFO: Created: latency-svc-rbbkj Jan 20 22:17:50.334: INFO: Got endpoints: latency-svc-rbbkj [1.32238465s] Jan 20 22:17:50.382: INFO: Created: latency-svc-sr79m Jan 20 22:17:50.390: INFO: Got endpoints: latency-svc-sr79m [1.190780566s] Jan 20 22:17:50.413: INFO: Created: latency-svc-lqdk5 Jan 20 22:17:50.431: INFO: Got endpoints: latency-svc-lqdk5 [1.22434627s] Jan 20 22:17:50.433: INFO: Created: latency-svc-qd26b Jan 20 22:17:50.499: INFO: Got endpoints: latency-svc-qd26b [1.243299447s] Jan 20 22:17:50.525: INFO: Created: latency-svc-rjkmq Jan 20 22:17:50.560: INFO: Got endpoints: latency-svc-rjkmq [1.179829602s] Jan 20 22:17:50.561: INFO: Created: latency-svc-9w97d Jan 20 22:17:50.572: INFO: Got endpoints: latency-svc-9w97d [1.178567315s] Jan 20 22:17:50.651: INFO: Created: latency-svc-zslzl Jan 20 22:17:50.652: INFO: Got endpoints: latency-svc-zslzl [1.223467042s] Jan 20 22:17:50.679: INFO: Created: latency-svc-cpklq Jan 20 22:17:50.686: INFO: Got endpoints: latency-svc-cpklq [1.08916136s] Jan 20 22:17:50.838: INFO: Created: latency-svc-kvnt2 Jan 20 22:17:50.838: INFO: Got endpoints: latency-svc-kvnt2 [1.192705281s] Jan 20 22:17:50.894: INFO: Created: latency-svc-ttfdd Jan 20 22:17:50.906: INFO: Got endpoints: latency-svc-ttfdd [1.108620137s] Jan 20 22:17:50.933: INFO: Created: latency-svc-hv6tf Jan 20 22:17:50.998: INFO: Created: latency-svc-zzsr4 Jan 20 22:17:50.998: INFO: Got endpoints: latency-svc-hv6tf [1.141962555s] Jan 20 22:17:51.018: INFO: Got endpoints: latency-svc-zzsr4 [1.047357614s] Jan 20 22:17:51.064: INFO: Created: latency-svc-drff7 Jan 20 22:17:51.072: INFO: Got endpoints: latency-svc-drff7 [1.031208706s] Jan 20 22:17:51.206: INFO: Created: latency-svc-cz42b Jan 20 22:17:51.206: INFO: Got endpoints: latency-svc-cz42b [1.061280754s] Jan 20 22:17:51.275: INFO: Created: latency-svc-kjg7t Jan 20 22:17:51.450: INFO: Got endpoints: latency-svc-kjg7t [1.30123787s] Jan 20 22:17:51.472: INFO: Created: latency-svc-g9q79 Jan 20 22:17:51.484: INFO: Got endpoints: latency-svc-g9q79 [1.149897179s] Jan 20 22:17:51.659: INFO: Created: latency-svc-krpjn Jan 20 22:17:51.664: INFO: Got endpoints: latency-svc-krpjn [1.273291014s] Jan 20 22:17:51.722: INFO: Created: latency-svc-cmhm5 Jan 20 22:17:51.808: INFO: Got endpoints: latency-svc-cmhm5 [1.376976082s] Jan 20 22:17:51.918: INFO: Created: latency-svc-qsgd2 Jan 20 22:17:51.927: INFO: Got endpoints: latency-svc-qsgd2 [1.427877168s] Jan 20 22:17:51.946: INFO: Created: latency-svc-n7ppt Jan 20 22:17:51.959: INFO: Got endpoints: latency-svc-n7ppt [1.39833131s] Jan 20 22:17:51.963: 
INFO: Created: latency-svc-n86q7 Jan 20 22:17:51.963: INFO: Got endpoints: latency-svc-n86q7 [1.391140193s] Jan 20 22:17:52.084: INFO: Created: latency-svc-qwwph Jan 20 22:17:52.101: INFO: Got endpoints: latency-svc-qwwph [1.449403936s] Jan 20 22:17:52.130: INFO: Created: latency-svc-w4t44 Jan 20 22:17:52.141: INFO: Got endpoints: latency-svc-w4t44 [1.455002079s] Jan 20 22:17:52.168: INFO: Created: latency-svc-dj5ll Jan 20 22:17:52.222: INFO: Created: latency-svc-575cc Jan 20 22:17:52.223: INFO: Got endpoints: latency-svc-dj5ll [1.384527816s] Jan 20 22:17:52.239: INFO: Got endpoints: latency-svc-575cc [1.33283306s] Jan 20 22:17:52.273: INFO: Created: latency-svc-8tbpl Jan 20 22:17:52.277: INFO: Got endpoints: latency-svc-8tbpl [1.27879557s] Jan 20 22:17:52.316: INFO: Created: latency-svc-nsb7p Jan 20 22:17:52.382: INFO: Got endpoints: latency-svc-nsb7p [1.363378343s] Jan 20 22:17:52.385: INFO: Created: latency-svc-gxgcg Jan 20 22:17:52.416: INFO: Got endpoints: latency-svc-gxgcg [1.343142147s] Jan 20 22:17:52.529: INFO: Created: latency-svc-gwd28 Jan 20 22:17:52.554: INFO: Got endpoints: latency-svc-gwd28 [1.347582718s] Jan 20 22:17:52.556: INFO: Created: latency-svc-sdgws Jan 20 22:17:52.563: INFO: Got endpoints: latency-svc-sdgws [1.112015542s] Jan 20 22:17:52.606: INFO: Created: latency-svc-7d858 Jan 20 22:17:52.624: INFO: Got endpoints: latency-svc-7d858 [1.139425575s] Jan 20 22:17:52.687: INFO: Created: latency-svc-9lmbt Jan 20 22:17:52.687: INFO: Got endpoints: latency-svc-9lmbt [1.023238318s] Jan 20 22:17:52.708: INFO: Created: latency-svc-6zgj5 Jan 20 22:17:52.718: INFO: Got endpoints: latency-svc-6zgj5 [910.253556ms] Jan 20 22:17:52.804: INFO: Created: latency-svc-b8mpm Jan 20 22:17:52.821: INFO: Got endpoints: latency-svc-b8mpm [894.301383ms] Jan 20 22:17:52.826: INFO: Created: latency-svc-fsqqs Jan 20 22:17:52.829: INFO: Got endpoints: latency-svc-fsqqs [869.087522ms] Jan 20 22:17:52.895: INFO: Created: latency-svc-99d5s Jan 20 22:17:52.944: INFO: Got endpoints: latency-svc-99d5s [980.431761ms] Jan 20 22:17:52.958: INFO: Created: latency-svc-w5wnq Jan 20 22:17:52.960: INFO: Got endpoints: latency-svc-w5wnq [858.767492ms] Jan 20 22:17:52.982: INFO: Created: latency-svc-zrqhv Jan 20 22:17:52.986: INFO: Got endpoints: latency-svc-zrqhv [844.685761ms] Jan 20 22:17:53.001: INFO: Created: latency-svc-tfhvh Jan 20 22:17:53.025: INFO: Got endpoints: latency-svc-tfhvh [802.149493ms] Jan 20 22:17:53.113: INFO: Created: latency-svc-rvrtd Jan 20 22:17:53.113: INFO: Got endpoints: latency-svc-rvrtd [874.318119ms] Jan 20 22:17:53.150: INFO: Created: latency-svc-9wmnq Jan 20 22:17:53.177: INFO: Got endpoints: latency-svc-9wmnq [899.886425ms] Jan 20 22:17:53.294: INFO: Created: latency-svc-6w5gn Jan 20 22:17:53.306: INFO: Got endpoints: latency-svc-6w5gn [924.148968ms] Jan 20 22:17:53.393: INFO: Created: latency-svc-8h7jc Jan 20 22:17:53.445: INFO: Got endpoints: latency-svc-8h7jc [1.029018933s] Jan 20 22:17:53.451: INFO: Created: latency-svc-qqmmn Jan 20 22:17:53.474: INFO: Got endpoints: latency-svc-qqmmn [919.297594ms] Jan 20 22:17:53.500: INFO: Created: latency-svc-ftps5 Jan 20 22:17:53.538: INFO: Got endpoints: latency-svc-ftps5 [974.544792ms] Jan 20 22:17:53.540: INFO: Created: latency-svc-vs6pw Jan 20 22:17:53.603: INFO: Got endpoints: latency-svc-vs6pw [978.256109ms] Jan 20 22:17:53.611: INFO: Created: latency-svc-pctb8 Jan 20 22:17:53.611: INFO: Got endpoints: latency-svc-pctb8 [923.926097ms] Jan 20 22:17:53.660: INFO: Created: latency-svc-twwn8 Jan 20 22:17:53.671: INFO: Got 
endpoints: latency-svc-twwn8 [952.013948ms] Jan 20 22:17:53.790: INFO: Created: latency-svc-9bk5v Jan 20 22:17:53.791: INFO: Got endpoints: latency-svc-9bk5v [969.512088ms] Jan 20 22:17:53.881: INFO: Created: latency-svc-g9jxq Jan 20 22:17:53.941: INFO: Got endpoints: latency-svc-g9jxq [1.111950679s] Jan 20 22:17:53.947: INFO: Created: latency-svc-7mmsf Jan 20 22:17:53.957: INFO: Got endpoints: latency-svc-7mmsf [1.013120352s] Jan 20 22:17:53.983: INFO: Created: latency-svc-nz7h2 Jan 20 22:17:53.988: INFO: Got endpoints: latency-svc-nz7h2 [1.027714804s] Jan 20 22:17:54.009: INFO: Created: latency-svc-5xvtq Jan 20 22:17:54.028: INFO: Got endpoints: latency-svc-5xvtq [1.042467044s] Jan 20 22:17:54.031: INFO: Created: latency-svc-htfdf Jan 20 22:17:54.090: INFO: Got endpoints: latency-svc-htfdf [1.064488794s] Jan 20 22:17:54.095: INFO: Created: latency-svc-kj2gj Jan 20 22:17:54.104: INFO: Got endpoints: latency-svc-kj2gj [991.332858ms] Jan 20 22:17:54.130: INFO: Created: latency-svc-dskf4 Jan 20 22:17:54.141: INFO: Got endpoints: latency-svc-dskf4 [963.972417ms] Jan 20 22:17:54.256: INFO: Created: latency-svc-45w9q Jan 20 22:17:54.279: INFO: Got endpoints: latency-svc-45w9q [972.276496ms] Jan 20 22:17:54.282: INFO: Created: latency-svc-6qlzb Jan 20 22:17:54.312: INFO: Got endpoints: latency-svc-6qlzb [866.757694ms] Jan 20 22:17:54.401: INFO: Created: latency-svc-jtp2n Jan 20 22:17:54.426: INFO: Got endpoints: latency-svc-jtp2n [951.664136ms] Jan 20 22:17:54.429: INFO: Created: latency-svc-jlf7m Jan 20 22:17:54.437: INFO: Got endpoints: latency-svc-jlf7m [898.209494ms] Jan 20 22:17:54.455: INFO: Created: latency-svc-mlxrt Jan 20 22:17:54.469: INFO: Got endpoints: latency-svc-mlxrt [865.591153ms] Jan 20 22:17:54.495: INFO: Created: latency-svc-x5dqd Jan 20 22:17:54.571: INFO: Got endpoints: latency-svc-x5dqd [959.89927ms] Jan 20 22:17:54.598: INFO: Created: latency-svc-2kg27 Jan 20 22:17:54.609: INFO: Got endpoints: latency-svc-2kg27 [937.907607ms] Jan 20 22:17:54.728: INFO: Created: latency-svc-flrkq Jan 20 22:17:54.755: INFO: Got endpoints: latency-svc-flrkq [964.379955ms] Jan 20 22:17:54.759: INFO: Created: latency-svc-7svnf Jan 20 22:17:54.762: INFO: Got endpoints: latency-svc-7svnf [821.17993ms] Jan 20 22:17:54.795: INFO: Created: latency-svc-xxm85 Jan 20 22:17:54.804: INFO: Got endpoints: latency-svc-xxm85 [846.631862ms] Jan 20 22:17:54.902: INFO: Created: latency-svc-hc88g Jan 20 22:17:54.916: INFO: Got endpoints: latency-svc-hc88g [927.879139ms] Jan 20 22:17:54.939: INFO: Created: latency-svc-2rp2x Jan 20 22:17:54.951: INFO: Got endpoints: latency-svc-2rp2x [922.095484ms] Jan 20 22:17:54.987: INFO: Created: latency-svc-hzr9b Jan 20 22:17:54.987: INFO: Got endpoints: latency-svc-hzr9b [896.867351ms] Jan 20 22:17:55.122: INFO: Created: latency-svc-zvcgv Jan 20 22:17:55.133: INFO: Got endpoints: latency-svc-zvcgv [1.028503467s] Jan 20 22:17:55.250: INFO: Created: latency-svc-v9jgn Jan 20 22:17:55.257: INFO: Got endpoints: latency-svc-v9jgn [1.115567865s] Jan 20 22:17:55.277: INFO: Created: latency-svc-jv6xr Jan 20 22:17:55.290: INFO: Got endpoints: latency-svc-jv6xr [1.011468461s] Jan 20 22:17:55.320: INFO: Created: latency-svc-pvw5h Jan 20 22:17:55.324: INFO: Got endpoints: latency-svc-pvw5h [1.011915154s] Jan 20 22:17:55.393: INFO: Created: latency-svc-jrzkm Jan 20 22:17:55.398: INFO: Got endpoints: latency-svc-jrzkm [971.246313ms] Jan 20 22:17:55.436: INFO: Created: latency-svc-lmkjq Jan 20 22:17:55.457: INFO: Got endpoints: latency-svc-lmkjq [1.020203669s] Jan 20 22:17:55.484: INFO: 
Created: latency-svc-2qgs2 Jan 20 22:17:55.493: INFO: Got endpoints: latency-svc-2qgs2 [1.023576465s] Jan 20 22:17:55.595: INFO: Created: latency-svc-2v6pt Jan 20 22:17:55.602: INFO: Got endpoints: latency-svc-2v6pt [1.03045763s] Jan 20 22:17:55.647: INFO: Created: latency-svc-h9xz8 Jan 20 22:17:55.670: INFO: Got endpoints: latency-svc-h9xz8 [1.060447626s] Jan 20 22:17:55.751: INFO: Created: latency-svc-drmk5 Jan 20 22:17:55.751: INFO: Got endpoints: latency-svc-drmk5 [995.727705ms] Jan 20 22:17:55.798: INFO: Created: latency-svc-zlxbp Jan 20 22:17:55.800: INFO: Got endpoints: latency-svc-zlxbp [1.03804126s] Jan 20 22:17:55.826: INFO: Created: latency-svc-tftqj Jan 20 22:17:55.894: INFO: Got endpoints: latency-svc-tftqj [1.089966838s] Jan 20 22:17:55.896: INFO: Created: latency-svc-llwbz Jan 20 22:17:55.900: INFO: Got endpoints: latency-svc-llwbz [983.74557ms] Jan 20 22:17:55.920: INFO: Created: latency-svc-sllxv Jan 20 22:17:56.031: INFO: Got endpoints: latency-svc-sllxv [1.080558136s] Jan 20 22:17:56.045: INFO: Created: latency-svc-vhhsk Jan 20 22:17:56.047: INFO: Got endpoints: latency-svc-vhhsk [1.059397913s] Jan 20 22:17:56.070: INFO: Created: latency-svc-lv4x6 Jan 20 22:17:56.072: INFO: Got endpoints: latency-svc-lv4x6 [939.02283ms] Jan 20 22:17:56.097: INFO: Created: latency-svc-w55g2 Jan 20 22:17:56.103: INFO: Got endpoints: latency-svc-w55g2 [846.566093ms] Jan 20 22:17:56.173: INFO: Created: latency-svc-kfj9r Jan 20 22:17:56.215: INFO: Got endpoints: latency-svc-kfj9r [925.084341ms] Jan 20 22:17:56.221: INFO: Created: latency-svc-q2t2c Jan 20 22:17:56.247: INFO: Got endpoints: latency-svc-q2t2c [923.403316ms] Jan 20 22:17:56.314: INFO: Created: latency-svc-npptt Jan 20 22:17:56.342: INFO: Created: latency-svc-phk6z Jan 20 22:17:56.342: INFO: Got endpoints: latency-svc-npptt [944.38385ms] Jan 20 22:17:56.348: INFO: Got endpoints: latency-svc-phk6z [890.534218ms] Jan 20 22:17:56.517: INFO: Created: latency-svc-vqxj5 Jan 20 22:17:56.526: INFO: Got endpoints: latency-svc-vqxj5 [1.033004842s] Jan 20 22:17:56.569: INFO: Created: latency-svc-sskq2 Jan 20 22:17:56.572: INFO: Got endpoints: latency-svc-sskq2 [970.259903ms] Jan 20 22:17:56.709: INFO: Created: latency-svc-w5hv4 Jan 20 22:17:56.715: INFO: Got endpoints: latency-svc-w5hv4 [1.045774216s] Jan 20 22:17:56.804: INFO: Created: latency-svc-pt2cr Jan 20 22:17:56.924: INFO: Got endpoints: latency-svc-pt2cr [1.172646189s] Jan 20 22:17:56.930: INFO: Created: latency-svc-rfd9q Jan 20 22:17:56.938: INFO: Got endpoints: latency-svc-rfd9q [1.137744602s] Jan 20 22:17:57.019: INFO: Created: latency-svc-c9hpz Jan 20 22:17:57.155: INFO: Got endpoints: latency-svc-c9hpz [1.260916483s] Jan 20 22:17:57.167: INFO: Created: latency-svc-9f2fk Jan 20 22:17:57.191: INFO: Got endpoints: latency-svc-9f2fk [1.291305763s] Jan 20 22:17:57.354: INFO: Created: latency-svc-86qlj Jan 20 22:17:57.371: INFO: Got endpoints: latency-svc-86qlj [1.339533526s] Jan 20 22:17:57.373: INFO: Created: latency-svc-mk89m Jan 20 22:17:57.378: INFO: Got endpoints: latency-svc-mk89m [1.331171227s] Jan 20 22:17:57.410: INFO: Created: latency-svc-55b78 Jan 20 22:17:57.429: INFO: Got endpoints: latency-svc-55b78 [1.356515569s] Jan 20 22:17:57.432: INFO: Created: latency-svc-swxt6 Jan 20 22:17:57.495: INFO: Got endpoints: latency-svc-swxt6 [1.391711732s] Jan 20 22:17:57.510: INFO: Created: latency-svc-t74mr Jan 20 22:17:57.517: INFO: Got endpoints: latency-svc-t74mr [1.301337978s] Jan 20 22:17:57.543: INFO: Created: latency-svc-fjj66 Jan 20 22:17:57.547: INFO: Got endpoints: 
latency-svc-fjj66 [1.298575203s] Jan 20 22:17:57.663: INFO: Created: latency-svc-zjtjb Jan 20 22:17:57.671: INFO: Got endpoints: latency-svc-zjtjb [1.328692918s] Jan 20 22:17:57.693: INFO: Created: latency-svc-9psx8 Jan 20 22:17:57.706: INFO: Got endpoints: latency-svc-9psx8 [1.358383014s] Jan 20 22:17:57.728: INFO: Created: latency-svc-p7tv5 Jan 20 22:17:57.743: INFO: Got endpoints: latency-svc-p7tv5 [1.216467616s] Jan 20 22:17:57.844: INFO: Created: latency-svc-q4qn6 Jan 20 22:17:57.857: INFO: Got endpoints: latency-svc-q4qn6 [1.284825363s] Jan 20 22:17:57.908: INFO: Created: latency-svc-kdnl7 Jan 20 22:17:57.924: INFO: Got endpoints: latency-svc-kdnl7 [1.208846191s] Jan 20 22:17:58.023: INFO: Created: latency-svc-lpt4k Jan 20 22:17:58.024: INFO: Got endpoints: latency-svc-lpt4k [1.099398497s] Jan 20 22:17:58.065: INFO: Created: latency-svc-pvtjf Jan 20 22:17:58.066: INFO: Got endpoints: latency-svc-pvtjf [1.127818s] Jan 20 22:17:58.170: INFO: Created: latency-svc-cjb72 Jan 20 22:17:58.181: INFO: Got endpoints: latency-svc-cjb72 [1.025420612s] Jan 20 22:17:58.234: INFO: Created: latency-svc-dhp55 Jan 20 22:17:58.238: INFO: Got endpoints: latency-svc-dhp55 [1.046553466s] Jan 20 22:17:58.256: INFO: Created: latency-svc-qnfl6 Jan 20 22:17:58.326: INFO: Got endpoints: latency-svc-qnfl6 [954.701749ms] Jan 20 22:17:58.358: INFO: Created: latency-svc-flnx6 Jan 20 22:17:58.380: INFO: Got endpoints: latency-svc-flnx6 [1.001631254s] Jan 20 22:17:58.409: INFO: Created: latency-svc-gcnlp Jan 20 22:17:58.477: INFO: Got endpoints: latency-svc-gcnlp [1.047445086s] Jan 20 22:17:58.487: INFO: Created: latency-svc-h5rx8 Jan 20 22:17:58.501: INFO: Got endpoints: latency-svc-h5rx8 [1.005681654s] Jan 20 22:17:58.551: INFO: Created: latency-svc-bj6bs Jan 20 22:17:58.565: INFO: Got endpoints: latency-svc-bj6bs [1.047792589s] Jan 20 22:17:58.718: INFO: Created: latency-svc-bt5tg Jan 20 22:17:58.730: INFO: Got endpoints: latency-svc-bt5tg [1.183602901s] Jan 20 22:17:58.761: INFO: Created: latency-svc-rdqfq Jan 20 22:17:58.780: INFO: Got endpoints: latency-svc-rdqfq [1.108528887s] Jan 20 22:17:58.807: INFO: Created: latency-svc-hz78q Jan 20 22:17:58.812: INFO: Got endpoints: latency-svc-hz78q [1.105326825s] Jan 20 22:17:58.870: INFO: Created: latency-svc-b7g4x Jan 20 22:17:58.884: INFO: Got endpoints: latency-svc-b7g4x [1.140772025s] Jan 20 22:17:58.928: INFO: Created: latency-svc-vsnqp Jan 20 22:17:58.961: INFO: Got endpoints: latency-svc-vsnqp [1.103642399s] Jan 20 22:17:59.044: INFO: Created: latency-svc-f976k Jan 20 22:17:59.071: INFO: Got endpoints: latency-svc-f976k [1.145953882s] Jan 20 22:17:59.115: INFO: Created: latency-svc-5vr5w Jan 20 22:17:59.118: INFO: Got endpoints: latency-svc-5vr5w [1.094353919s] Jan 20 22:17:59.298: INFO: Created: latency-svc-jbbdb Jan 20 22:17:59.332: INFO: Got endpoints: latency-svc-jbbdb [1.265287317s] Jan 20 22:17:59.362: INFO: Created: latency-svc-z7srh Jan 20 22:17:59.370: INFO: Got endpoints: latency-svc-z7srh [1.189178646s] Jan 20 22:17:59.479: INFO: Created: latency-svc-rm7n9 Jan 20 22:17:59.505: INFO: Got endpoints: latency-svc-rm7n9 [1.267405616s] Jan 20 22:17:59.510: INFO: Created: latency-svc-hs4l9 Jan 20 22:17:59.548: INFO: Got endpoints: latency-svc-hs4l9 [1.221199511s] Jan 20 22:17:59.626: INFO: Created: latency-svc-45rhw Jan 20 22:17:59.692: INFO: Got endpoints: latency-svc-45rhw [1.312490925s] Jan 20 22:17:59.784: INFO: Created: latency-svc-hv28v Jan 20 22:17:59.787: INFO: Got endpoints: latency-svc-hv28v [1.31022718s] Jan 20 22:17:59.849: INFO: Created: 
latency-svc-krx84 Jan 20 22:17:59.876: INFO: Got endpoints: latency-svc-krx84 [1.375061969s] Jan 20 22:18:00.024: INFO: Created: latency-svc-wp6dk Jan 20 22:18:00.068: INFO: Got endpoints: latency-svc-wp6dk [1.502620372s] Jan 20 22:18:00.068: INFO: Latencies: [157.964711ms 163.214473ms 199.711575ms 233.256291ms 330.015469ms 393.093766ms 476.182552ms 510.973652ms 545.850256ms 688.097246ms 746.250026ms 802.149493ms 821.17993ms 844.685761ms 846.566093ms 846.631862ms 858.767492ms 865.591153ms 866.757694ms 867.56707ms 869.087522ms 874.318119ms 890.534218ms 894.301383ms 896.867351ms 898.209494ms 899.886425ms 910.253556ms 919.297594ms 922.095484ms 923.403316ms 923.926097ms 924.148968ms 925.084341ms 927.879139ms 928.087778ms 931.57558ms 937.907607ms 939.02283ms 942.697342ms 944.38385ms 948.173056ms 951.664136ms 952.013948ms 954.701749ms 955.097061ms 955.313777ms 957.939879ms 959.89927ms 962.851993ms 963.972417ms 964.379955ms 969.512088ms 970.259903ms 971.246313ms 972.276496ms 974.544792ms 975.443766ms 978.256109ms 980.431761ms 983.74557ms 991.332858ms 995.727705ms 1.001631254s 1.003077293s 1.005681654s 1.008483268s 1.010446506s 1.011468461s 1.011915154s 1.013120352s 1.016529286s 1.020203669s 1.023238318s 1.023576465s 1.025420612s 1.027714804s 1.028352519s 1.028503467s 1.028848929s 1.029018933s 1.03045763s 1.031208706s 1.033004842s 1.034665499s 1.03804126s 1.042467044s 1.043909454s 1.045774216s 1.046553466s 1.047357614s 1.047445086s 1.047573171s 1.047792589s 1.055994679s 1.056493717s 1.059397913s 1.060447626s 1.061280754s 1.061297293s 1.064488794s 1.070083609s 1.070815086s 1.072523801s 1.078624528s 1.080344101s 1.080558136s 1.086233046s 1.08916136s 1.089966838s 1.094353919s 1.099398497s 1.099627478s 1.102807941s 1.103642399s 1.103797545s 1.105326825s 1.105430621s 1.108528887s 1.108620137s 1.111950679s 1.112015542s 1.112954783s 1.115567865s 1.123965787s 1.127497588s 1.127818s 1.137744602s 1.139425575s 1.140772025s 1.141962555s 1.145953882s 1.149897179s 1.155619136s 1.161235715s 1.162007525s 1.164640681s 1.170637414s 1.172646189s 1.178567315s 1.179090417s 1.179829602s 1.183602901s 1.186075924s 1.189178646s 1.190780566s 1.192705281s 1.197090986s 1.20340245s 1.208846191s 1.213253857s 1.216467616s 1.221199511s 1.223467042s 1.22434627s 1.239610621s 1.24014774s 1.24206683s 1.243299447s 1.244607177s 1.260916483s 1.263541431s 1.265287317s 1.267405616s 1.273291014s 1.27879557s 1.281994107s 1.284825363s 1.291305763s 1.295980235s 1.297908039s 1.298575203s 1.30123787s 1.301337978s 1.304722647s 1.31022718s 1.312490925s 1.32238465s 1.328692918s 1.331171227s 1.33283306s 1.33317371s 1.339533526s 1.343142147s 1.344681335s 1.347582718s 1.356515569s 1.358383014s 1.363378343s 1.373421456s 1.375061969s 1.376976082s 1.384527816s 1.391140193s 1.391711732s 1.39833131s 1.427877168s 1.449403936s 1.455002079s 1.502620372s] Jan 20 22:18:00.068: INFO: 50 %ile: 1.064488794s Jan 20 22:18:00.068: INFO: 90 %ile: 1.33283306s Jan 20 22:18:00.068: INFO: 99 %ile: 1.455002079s Jan 20 22:18:00.068: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:18:00.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1288" for this suite. 
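------------------------------
[Reference note] The percentile lines above are consistent with taking the sorted sample at index p*N/100 of the 200-entry latency list: the 50 %ile (1.064488794s) sits at the midpoint of the sorted run, the 90 %ile (1.33283306s) leaves the 19 slowest samples above it, and the 99 %ile (1.455002079s) is the second-largest sample, exceeded only by the single 1.502620372s outlier. That rank formula is inferred from the data, not stated in the log; the spec itself only asserts these latencies are "not very high".
------------------------------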
• [SLOW TEST:22.319 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":201,"skipped":3443,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:18:00.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 20 22:18:00.246: INFO: Waiting up to 5m0s for pod "pod-5528bcab-96cf-476b-8cb5-05b5e164f358" in namespace "emptydir-3767" to be "success or failure" Jan 20 22:18:00.251: INFO: Pod "pod-5528bcab-96cf-476b-8cb5-05b5e164f358": Phase="Pending", Reason="", readiness=false. Elapsed: 5.424038ms Jan 20 22:18:02.259: INFO: Pod "pod-5528bcab-96cf-476b-8cb5-05b5e164f358": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013381874s Jan 20 22:18:04.268: INFO: Pod "pod-5528bcab-96cf-476b-8cb5-05b5e164f358": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022669497s Jan 20 22:18:06.940: INFO: Pod "pod-5528bcab-96cf-476b-8cb5-05b5e164f358": Phase="Pending", Reason="", readiness=false. Elapsed: 6.694086208s Jan 20 22:18:08.999: INFO: Pod "pod-5528bcab-96cf-476b-8cb5-05b5e164f358": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.753195819s STEP: Saw pod success Jan 20 22:18:08.999: INFO: Pod "pod-5528bcab-96cf-476b-8cb5-05b5e164f358" satisfied condition "success or failure" Jan 20 22:18:09.013: INFO: Trying to get logs from node jerma-node pod pod-5528bcab-96cf-476b-8cb5-05b5e164f358 container test-container: STEP: delete the pod Jan 20 22:18:09.137: INFO: Waiting for pod pod-5528bcab-96cf-476b-8cb5-05b5e164f358 to disappear Jan 20 22:18:09.153: INFO: Pod pod-5528bcab-96cf-476b-8cb5-05b5e164f358 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:18:09.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3767" for this suite. 
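------------------------------
[Reference sketch] A minimal pod of the shape this spec exercises: an emptyDir volume on the default medium, written as root with 0777 permissions, with the pod expected to terminate so the "success or failure" wait above can observe Succeeded. The image and command here are illustrative assumptions; the actual test drives a dedicated mount-test container:
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0777            # illustrative; the real test generates a UUID name
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium, i.e. node-local disk
  containers:
  - name: test-container
    image: docker.io/library/busybox # assumption; stands in for the e2e test image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
------------------------------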
• [SLOW TEST:9.113 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3465,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:18:09.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-bac326b7-822e-497b-ad87-9bb8d6fbda41 in namespace container-probe-418 Jan 20 22:18:23.374: INFO: Started pod liveness-bac326b7-822e-497b-ad87-9bb8d6fbda41 in namespace container-probe-418 STEP: checking the pod's current state and verifying that restartCount is present Jan 20 22:18:23.384: INFO: Initial restart count of pod liveness-bac326b7-822e-497b-ad87-9bb8d6fbda41 is 0 Jan 20 22:18:47.563: INFO: Restart count of pod container-probe-418/liveness-bac326b7-822e-497b-ad87-9bb8d6fbda41 is now 1 (24.179088539s elapsed) Jan 20 22:19:05.647: INFO: Restart count of pod container-probe-418/liveness-bac326b7-822e-497b-ad87-9bb8d6fbda41 is now 2 (42.262943387s elapsed) Jan 20 22:19:25.878: INFO: Restart count of pod container-probe-418/liveness-bac326b7-822e-497b-ad87-9bb8d6fbda41 is now 3 (1m2.493718503s elapsed) Jan 20 22:19:46.000: INFO: Restart count of pod container-probe-418/liveness-bac326b7-822e-497b-ad87-9bb8d6fbda41 is now 4 (1m22.616118329s elapsed) Jan 20 22:20:55.036: INFO: Restart count of pod container-probe-418/liveness-bac326b7-822e-497b-ad87-9bb8d6fbda41 is now 5 (2m31.652139985s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:20:55.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-418" for this suite. 
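------------------------------
[Reference sketch] Each restart counted above comes from a liveness probe that keeps failing, so the kubelet kills and restarts the container and the count can only move upward; the widening gap before the fifth restart (~69 s versus ~20 s earlier) is the kubelet's exponential crash-loop backoff. A pod of that shape might look like this (image, command, and probe timings are assumptions, not the test's exact settings):
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox    # assumption
    args: ["sh", "-c", "sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"] # the file never exists, so the probe always fails
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
------------------------------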
• [SLOW TEST:165.988 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3498,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:20:55.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:21:03.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2814" for this suite. 
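------------------------------
[Reference sketch] The [It] body above is empty because the pod is created in the BeforeEach: a busybox command that always exits non-zero, after which the spec asserts the container status carries a terminated state with a populated reason (typically Error) rather than a running or waiting one. A minimal equivalent; the name and restart policy are illustrative assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: docker.io/library/busybox
    command: ["/bin/false"]          # exits 1 every time, producing a terminated state
------------------------------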
• [SLOW TEST:8.336 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3504,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:21:03.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 20 22:21:03.801: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eaba6b09-ff74-4fcb-8d46-63f40cc8dc24" in namespace "projected-5541" to be "success or failure" Jan 20 22:21:03.852: INFO: Pod "downwardapi-volume-eaba6b09-ff74-4fcb-8d46-63f40cc8dc24": Phase="Pending", Reason="", readiness=false. Elapsed: 50.897339ms Jan 20 22:21:05.863: INFO: Pod "downwardapi-volume-eaba6b09-ff74-4fcb-8d46-63f40cc8dc24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06138652s Jan 20 22:21:07.871: INFO: Pod "downwardapi-volume-eaba6b09-ff74-4fcb-8d46-63f40cc8dc24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069766964s Jan 20 22:21:09.879: INFO: Pod "downwardapi-volume-eaba6b09-ff74-4fcb-8d46-63f40cc8dc24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077793373s Jan 20 22:21:11.895: INFO: Pod "downwardapi-volume-eaba6b09-ff74-4fcb-8d46-63f40cc8dc24": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.093577902s STEP: Saw pod success Jan 20 22:21:11.895: INFO: Pod "downwardapi-volume-eaba6b09-ff74-4fcb-8d46-63f40cc8dc24" satisfied condition "success or failure" Jan 20 22:21:11.912: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-eaba6b09-ff74-4fcb-8d46-63f40cc8dc24 container client-container: STEP: delete the pod Jan 20 22:21:12.161: INFO: Waiting for pod downwardapi-volume-eaba6b09-ff74-4fcb-8d46-63f40cc8dc24 to disappear Jan 20 22:21:12.175: INFO: Pod downwardapi-volume-eaba6b09-ff74-4fcb-8d46-63f40cc8dc24 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:21:12.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5541" for this suite. • [SLOW TEST:8.760 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3527,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:21:12.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 20 22:21:21.021: INFO: Successfully updated pod "labelsupdate771e4f11-8e68-44ff-8573-676e3511cc99" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:21:23.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2979" for this suite. 
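------------------------------
[Reference sketch] Both projected downwardAPI specs above mount pod metadata as files: when the container declares no cpu limit, the exposed limits.cpu value falls back to the node's allocatable cpu, and edits to the pod's labels are rewritten into the mounted file, which is what the labels spec polls for after "Successfully updated pod". A sketch of such a volume, with illustrative names and paths:
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-example
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox # assumption
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit /etc/podinfo/labels; sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit          # falls back to node allocatable when no limit is set
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
          - path: labels             # kept in sync with metadata.labels while the pod runs
            fieldRef:
              fieldPath: metadata.labels
------------------------------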
• [SLOW TEST:10.799 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3537,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:21:23.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 22:21:23.174: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 20 22:21:23.259: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 20 22:21:28.284: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 20 22:21:32.303: INFO: Creating deployment "test-rolling-update-deployment" Jan 20 22:21:32.313: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 20 22:21:32.320: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 20 22:21:34.336: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 20 22:21:34.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155692, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155692, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155692, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155692, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:21:36.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155692, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155692, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155692, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155692, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:21:38.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155692, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155692, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155692, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715155692, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 22:21:40.354: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 20 22:21:40.376: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9040 /apis/apps/v1/namespaces/deployment-9040/deployments/test-rolling-update-deployment 19da4a29-ca51-46ef-96a8-449e558f6f5b 3267276 1 2020-01-20 22:21:32 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030c60b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-20 22:21:32 +0000 UTC,LastTransitionTime:2020-01-20 22:21:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-20 22:21:38 +0000 UTC,LastTransitionTime:2020-01-20 22:21:32 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 20 22:21:40.381: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-9040 /apis/apps/v1/namespaces/deployment-9040/replicasets/test-rolling-update-deployment-67cf4f6444 6742e572-e9b5-41f7-9a09-6e15b2e7b543 3267266 1 2020-01-20 22:21:32 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 19da4a29-ca51-46ef-96a8-449e558f6f5b 0xc002c4e867 0xc002c4e868}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c4e8d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 20 22:21:40.381: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 20 22:21:40.381: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9040 /apis/apps/v1/namespaces/deployment-9040/replicasets/test-rolling-update-controller c5beb5dc-5182-4c21-968e-30e3f72e3210 3267275 2 2020-01-20 22:21:23 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 19da4a29-ca51-46ef-96a8-449e558f6f5b 0xc002c4e797 0xc002c4e798}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002c4e7f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 20 22:21:40.387: INFO: Pod "test-rolling-update-deployment-67cf4f6444-vtn5k" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-vtn5k test-rolling-update-deployment-67cf4f6444- deployment-9040 /api/v1/namespaces/deployment-9040/pods/test-rolling-update-deployment-67cf4f6444-vtn5k ca35209f-0e5d-45c8-a727-109132c02c90 3267265 0 2020-01-20 22:21:32 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 6742e572-e9b5-41f7-9a09-6e15b2e7b543 0xc002c4ed27 0xc002c4ed28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bb2wp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bb2wp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bb2wp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:21:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:21:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:21:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:21:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-20 22:21:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-20 22:21:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://b31e4429c0ae9a13198d67874bfb6d43e5e13c6297d137dc7df11c51bcbf3dab,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:21:40.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9040" for this suite. 
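------------------------------
[Reference sketch] The object dump above is easier to read as a manifest. The deployment runs one agnhost replica selected by name=sample-pod with the RollingUpdate strategy; the "25%!,(MISSING)" fragments in the dump are a fmt-verb printing artifact and stand for maxUnavailable: 25% and maxSurge: 25%, which is why one old and one new pod briefly coexist (Replicas:2, UpdatedReplicas:1) before the adopted replica set is scaled to zero. Reconstructed from the dump:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
------------------------------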
• [SLOW TEST:17.307 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":207,"skipped":3545,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:21:40.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-9rgk STEP: Creating a pod to test atomic-volume-subpath Jan 20 22:21:40.644: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9rgk" in namespace "subpath-1377" to be "success or failure" Jan 20 22:21:40.708: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Pending", Reason="", readiness=false. Elapsed: 64.334051ms Jan 20 22:21:42.715: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071563922s Jan 20 22:21:44.720: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076218105s Jan 20 22:21:46.727: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083693584s Jan 20 22:21:48.736: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09229032s Jan 20 22:21:50.747: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Running", Reason="", readiness=true. Elapsed: 10.103293929s Jan 20 22:21:52.752: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Running", Reason="", readiness=true. Elapsed: 12.108165412s Jan 20 22:21:54.762: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Running", Reason="", readiness=true. Elapsed: 14.118332924s Jan 20 22:21:56.769: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Running", Reason="", readiness=true. Elapsed: 16.12544118s Jan 20 22:21:58.778: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Running", Reason="", readiness=true. Elapsed: 18.134059079s Jan 20 22:22:00.784: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Running", Reason="", readiness=true. Elapsed: 20.140369496s Jan 20 22:22:02.790: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Running", Reason="", readiness=true. Elapsed: 22.146680274s Jan 20 22:22:04.799: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Running", Reason="", readiness=true. Elapsed: 24.15523633s Jan 20 22:22:06.806: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.16197123s Jan 20 22:22:08.813: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Running", Reason="", readiness=true. Elapsed: 28.169139019s Jan 20 22:22:10.822: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Running", Reason="", readiness=true. Elapsed: 30.178014528s Jan 20 22:22:12.829: INFO: Pod "pod-subpath-test-configmap-9rgk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.185350659s STEP: Saw pod success Jan 20 22:22:12.829: INFO: Pod "pod-subpath-test-configmap-9rgk" satisfied condition "success or failure" Jan 20 22:22:12.833: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-9rgk container test-container-subpath-configmap-9rgk: STEP: delete the pod Jan 20 22:22:12.911: INFO: Waiting for pod pod-subpath-test-configmap-9rgk to disappear Jan 20 22:22:12.922: INFO: Pod pod-subpath-test-configmap-9rgk no longer exists STEP: Deleting pod pod-subpath-test-configmap-9rgk Jan 20 22:22:12.922: INFO: Deleting pod "pod-subpath-test-configmap-9rgk" in namespace "subpath-1377" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:22:12.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1377" for this suite. • [SLOW TEST:32.545 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":208,"skipped":3554,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:22:12.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 20 22:22:13.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8024' Jan 20 22:22:13.357: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 20 22:22:13.357: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Jan 20 22:22:13.411: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-95z2b] Jan 20 22:22:13.412: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-95z2b" in namespace "kubectl-8024" to be "running and ready" Jan 20 22:22:13.465: INFO: Pod "e2e-test-httpd-rc-95z2b": Phase="Pending", Reason="", readiness=false. Elapsed: 53.234267ms Jan 20 22:22:15.471: INFO: Pod "e2e-test-httpd-rc-95z2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059711497s Jan 20 22:22:17.482: INFO: Pod "e2e-test-httpd-rc-95z2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070722696s Jan 20 22:22:19.489: INFO: Pod "e2e-test-httpd-rc-95z2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077301571s Jan 20 22:22:21.495: INFO: Pod "e2e-test-httpd-rc-95z2b": Phase="Running", Reason="", readiness=true. Elapsed: 8.08327261s Jan 20 22:22:21.495: INFO: Pod "e2e-test-httpd-rc-95z2b" satisfied condition "running and ready" Jan 20 22:22:21.495: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-95z2b] Jan 20 22:22:21.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-8024' Jan 20 22:22:21.693: INFO: stderr: "" Jan 20 22:22:21.694: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Mon Jan 20 22:22:18.967529 2020] [mpm_event:notice] [pid 1:tid 139899676552040] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Jan 20 22:22:18.967623 2020] [core:notice] [pid 1:tid 139899676552040] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Jan 20 22:22:21.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8024' Jan 20 22:22:21.996: INFO: stderr: "" Jan 20 22:22:21.996: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:22:21.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8024" for this suite. 
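------------------------------
[Reference sketch] The stderr warning above is the deprecation this spec tolerates: --generator=run/v1 creates a ReplicationController, and the suggested replacements are a plain pod (--generator=run-pod/v1) or an explicit manifest. The RC the command produces is roughly the following; the run=e2e-test-httpd-rc label and selector are an assumption about what that generator applies:
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-httpd-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-httpd-rc
  template:
    metadata:
      labels:
        run: e2e-test-httpd-rc
    spec:
      containers:
      - name: e2e-test-httpd-rc
        image: docker.io/library/httpd:2.4.38-alpine
------------------------------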
• [SLOW TEST:9.142 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":209,"skipped":3557,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:22:22.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:22:33.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3152" for this suite. • [SLOW TEST:11.401 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":210,"skipped":3559,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:22:33.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Jan 20 22:22:33.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 20 22:22:33.829: INFO: stderr: "" Jan 20 22:22:33.829: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:22:33.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7150" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":211,"skipped":3561,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:22:33.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 20 22:22:48.092: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 20 22:22:48.136: INFO: Pod pod-with-poststart-http-hook still exists Jan 20 22:22:50.137: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 20 22:22:50.150: INFO: Pod pod-with-poststart-http-hook still exists Jan 20 22:22:52.137: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 20 22:22:52.152: INFO: Pod pod-with-poststart-http-hook still exists Jan 20 22:22:54.137: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 20 22:22:54.143: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:22:54.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2414" for this suite. • [SLOW TEST:20.309 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3584,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:22:54.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-c7d8097e-737c-4610-bac2-7f83a2baf86a STEP: Creating the pod STEP: Updating configmap configmap-test-upd-c7d8097e-737c-4610-bac2-7f83a2baf86a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:23:06.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7383" for this suite. 
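------------------------------
[Reference sketch] The configMap-update spec works because the kubelet periodically resyncs configMap-backed volumes: editing the ConfigMap object rewrites the mounted file while the pod keeps running, and most of the ~12 s above is spent waiting on that sync. A pod of the shape the spec watches (image, command, and key name are assumptions; the ConfigMap name is taken from the log above):
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-watch          # illustrative name
spec:
  containers:
  - name: test-container
    image: docker.io/library/busybox # assumption
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 2; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/config
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-c7d8097e-737c-4610-bac2-7f83a2baf86a
------------------------------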
• [SLOW TEST:12.321 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3593,"failed":0} [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:23:06.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Jan 20 22:23:06.555: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Jan 20 22:23:06.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-399' Jan 20 22:23:07.090: INFO: stderr: "" Jan 20 22:23:07.090: INFO: stdout: "service/agnhost-slave created\n" Jan 20 22:23:07.091: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Jan 20 22:23:07.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-399' Jan 20 22:23:07.709: INFO: stderr: "" Jan 20 22:23:07.709: INFO: stdout: "service/agnhost-master created\n" Jan 20 22:23:07.710: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jan 20 22:23:07.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-399' Jan 20 22:23:08.355: INFO: stderr: "" Jan 20 22:23:08.355: INFO: stdout: "service/frontend created\n" Jan 20 22:23:08.356: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Jan 20 22:23:08.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-399' Jan 20 22:23:08.850: INFO: stderr: "" Jan 20 22:23:08.851: INFO: stdout: "deployment.apps/frontend created\n" Jan 20 22:23:08.855: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 20 22:23:08.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-399' Jan 20 22:23:09.280: INFO: stderr: "" Jan 20 22:23:09.280: INFO: stdout: "deployment.apps/agnhost-master created\n" Jan 20 22:23:09.281: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 20 22:23:09.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-399' Jan 20 22:23:09.891: INFO: stderr: "" Jan 20 22:23:09.891: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jan 20 22:23:09.891: INFO: Waiting for all frontend pods to be Running. Jan 20 22:23:29.946: INFO: Waiting for frontend to serve content. Jan 20 22:23:30.020: INFO: Trying to add a new entry to the guestbook. Jan 20 22:23:30.045: INFO: Verifying that added entry can be retrieved. Jan 20 22:23:30.087: INFO: Failed to get response from guestbook. err: , response: {"data":""} STEP: using delete to clean up resources Jan 20 22:23:35.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-399' Jan 20 22:23:35.404: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Jan 20 22:23:35.405: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jan 20 22:23:35.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-399' Jan 20 22:23:35.558: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 20 22:23:35.558: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jan 20 22:23:35.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-399' Jan 20 22:23:35.734: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 20 22:23:35.734: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 20 22:23:35.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-399' Jan 20 22:23:35.874: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 20 22:23:35.874: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 20 22:23:35.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-399' Jan 20 22:23:36.033: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 20 22:23:36.033: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jan 20 22:23:36.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-399' Jan 20 22:23:36.243: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 20 22:23:36.243: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:23:36.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-399" for this suite. 
• [SLOW TEST:29.889 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":214,"skipped":3593,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:23:36.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8542 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-8542 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8542 Jan 20 22:23:38.421: INFO: Found 0 stateful pods, waiting for 1 Jan 20 22:23:48.437: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 20 22:23:58.440: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 20 22:23:58.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 22:23:58.884: INFO: stderr: "I0120 22:23:58.702975 2713 log.go:172] (0xc000112e70) (0xc0005d3f40) Create stream\nI0120 22:23:58.703256 2713 log.go:172] (0xc000112e70) (0xc0005d3f40) Stream added, broadcasting: 1\nI0120 22:23:58.708639 2713 log.go:172] (0xc000112e70) Reply frame received for 1\nI0120 22:23:58.708690 2713 log.go:172] (0xc000112e70) (0xc000255680) Create stream\nI0120 22:23:58.708698 2713 log.go:172] (0xc000112e70) (0xc000255680) Stream added, broadcasting: 3\nI0120 22:23:58.710227 2713 log.go:172] (0xc000112e70) Reply frame received for 3\nI0120 22:23:58.710258 2713 log.go:172] (0xc000112e70) (0xc000255720) Create stream\nI0120 22:23:58.710266 2713 log.go:172] (0xc000112e70) (0xc000255720) Stream added, broadcasting: 5\nI0120 22:23:58.711233 2713 log.go:172] (0xc000112e70) Reply frame received for 5\nI0120 22:23:58.776275 2713 log.go:172] (0xc000112e70) Data frame received for 5\nI0120 22:23:58.776361 2713 log.go:172] 
(0xc000255720) (5) Data frame handling\nI0120 22:23:58.776382 2713 log.go:172] (0xc000255720) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 22:23:58.802804 2713 log.go:172] (0xc000112e70) Data frame received for 3\nI0120 22:23:58.802845 2713 log.go:172] (0xc000255680) (3) Data frame handling\nI0120 22:23:58.802861 2713 log.go:172] (0xc000255680) (3) Data frame sent\nI0120 22:23:58.872564 2713 log.go:172] (0xc000112e70) Data frame received for 1\nI0120 22:23:58.872709 2713 log.go:172] (0xc0005d3f40) (1) Data frame handling\nI0120 22:23:58.872755 2713 log.go:172] (0xc0005d3f40) (1) Data frame sent\nI0120 22:23:58.872802 2713 log.go:172] (0xc000112e70) (0xc000255680) Stream removed, broadcasting: 3\nI0120 22:23:58.872858 2713 log.go:172] (0xc000112e70) (0xc0005d3f40) Stream removed, broadcasting: 1\nI0120 22:23:58.872880 2713 log.go:172] (0xc000112e70) (0xc000255720) Stream removed, broadcasting: 5\nI0120 22:23:58.872903 2713 log.go:172] (0xc000112e70) Go away received\nI0120 22:23:58.873859 2713 log.go:172] (0xc000112e70) (0xc0005d3f40) Stream removed, broadcasting: 1\nI0120 22:23:58.873870 2713 log.go:172] (0xc000112e70) (0xc000255680) Stream removed, broadcasting: 3\nI0120 22:23:58.873874 2713 log.go:172] (0xc000112e70) (0xc000255720) Stream removed, broadcasting: 5\n" Jan 20 22:23:58.884: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 22:23:58.884: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 22:23:58.893: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 20 22:24:08.914: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 20 22:24:08.914: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 22:24:08.939: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 22:24:08.939: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC }] Jan 20 22:24:08.940: INFO: Jan 20 22:24:08.940: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 20 22:24:10.557: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986603964s Jan 20 22:24:11.566: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.369093149s Jan 20 22:24:12.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.359867505s Jan 20 22:24:13.604: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.334247572s Jan 20 22:24:15.054: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.321869887s Jan 20 22:24:16.213: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.871556068s Jan 20 22:24:17.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.712894126s Jan 20 22:24:18.469: INFO: Verifying statefulset ss doesn't scale past 3 for another 481.227316ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8542 Jan 20 22:24:19.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:24:19.991: INFO: stderr: "I0120 22:24:19.727245 2733 log.go:172] (0xc000b28000) (0xc0006c8640) Create stream\nI0120 22:24:19.727541 2733 log.go:172] (0xc000b28000) (0xc0006c8640) Stream added, broadcasting: 1\nI0120 22:24:19.734930 2733 log.go:172] (0xc000b28000) Reply frame received for 1\nI0120 22:24:19.734999 2733 log.go:172] (0xc000b28000) (0xc0004f5400) Create stream\nI0120 22:24:19.735013 2733 log.go:172] (0xc000b28000) (0xc0004f5400) Stream added, broadcasting: 3\nI0120 22:24:19.737126 2733 log.go:172] (0xc000b28000) Reply frame received for 3\nI0120 22:24:19.737159 2733 log.go:172] (0xc000b28000) (0xc00095c000) Create stream\nI0120 22:24:19.737175 2733 log.go:172] (0xc000b28000) (0xc00095c000) Stream added, broadcasting: 5\nI0120 22:24:19.739163 2733 log.go:172] (0xc000b28000) Reply frame received for 5\nI0120 22:24:19.863001 2733 log.go:172] (0xc000b28000) Data frame received for 5\nI0120 22:24:19.863168 2733 log.go:172] (0xc00095c000) (5) Data frame handling\nI0120 22:24:19.863201 2733 log.go:172] (0xc00095c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0120 22:24:19.863285 2733 log.go:172] (0xc000b28000) Data frame received for 3\nI0120 22:24:19.863318 2733 log.go:172] (0xc0004f5400) (3) Data frame handling\nI0120 22:24:19.863334 2733 log.go:172] (0xc0004f5400) (3) Data frame sent\nI0120 22:24:19.983044 2733 log.go:172] (0xc000b28000) (0xc0004f5400) Stream removed, broadcasting: 3\nI0120 22:24:19.983139 2733 log.go:172] (0xc000b28000) Data frame received for 1\nI0120 22:24:19.983165 2733 log.go:172] (0xc000b28000) (0xc00095c000) Stream removed, broadcasting: 5\nI0120 22:24:19.983189 2733 log.go:172] (0xc0006c8640) (1) Data frame handling\nI0120 22:24:19.983214 2733 log.go:172] (0xc0006c8640) (1) Data frame sent\nI0120 22:24:19.983223 2733 log.go:172] (0xc000b28000) (0xc0006c8640) Stream removed, broadcasting: 1\nI0120 22:24:19.983235 2733 log.go:172] (0xc000b28000) Go away received\nI0120 22:24:19.984434 2733 log.go:172] (0xc000b28000) (0xc0006c8640) Stream removed, broadcasting: 1\nI0120 22:24:19.984453 2733 log.go:172] (0xc000b28000) (0xc0004f5400) Stream removed, broadcasting: 3\nI0120 22:24:19.984462 2733 log.go:172] (0xc000b28000) (0xc00095c000) Stream removed, broadcasting: 5\n" Jan 20 22:24:19.992: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 20 22:24:19.992: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 20 22:24:19.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:24:20.340: INFO: stderr: "I0120 22:24:20.166372 2753 log.go:172] (0xc0006880b0) (0xc000509400) Create stream\nI0120 22:24:20.166591 2753 log.go:172] (0xc0006880b0) (0xc000509400) Stream added, broadcasting: 1\nI0120 22:24:20.173125 2753 log.go:172] (0xc0006880b0) Reply frame received for 1\nI0120 22:24:20.173266 2753 log.go:172] (0xc0006880b0) (0xc0006bf9a0) Create stream\nI0120 22:24:20.173284 2753 log.go:172] (0xc0006880b0) (0xc0006bf9a0) Stream added, broadcasting: 3\nI0120 22:24:20.174394 2753 log.go:172] (0xc0006880b0) Reply frame received for 3\nI0120 22:24:20.174462 2753 log.go:172] (0xc0006880b0) (0xc0001f6000) Create stream\nI0120 22:24:20.174476 2753 log.go:172] 
(0xc0006880b0) (0xc0001f6000) Stream added, broadcasting: 5\nI0120 22:24:20.178260 2753 log.go:172] (0xc0006880b0) Reply frame received for 5\nI0120 22:24:20.241917 2753 log.go:172] (0xc0006880b0) Data frame received for 5\nI0120 22:24:20.241987 2753 log.go:172] (0xc0001f6000) (5) Data frame handling\nI0120 22:24:20.242003 2753 log.go:172] (0xc0001f6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0120 22:24:20.242024 2753 log.go:172] (0xc0006880b0) Data frame received for 3\nI0120 22:24:20.242032 2753 log.go:172] (0xc0006bf9a0) (3) Data frame handling\nI0120 22:24:20.242041 2753 log.go:172] (0xc0006bf9a0) (3) Data frame sent\nI0120 22:24:20.327634 2753 log.go:172] (0xc0006880b0) Data frame received for 1\nI0120 22:24:20.327718 2753 log.go:172] (0xc0006880b0) (0xc0006bf9a0) Stream removed, broadcasting: 3\nI0120 22:24:20.327856 2753 log.go:172] (0xc000509400) (1) Data frame handling\nI0120 22:24:20.327904 2753 log.go:172] (0xc000509400) (1) Data frame sent\nI0120 22:24:20.327945 2753 log.go:172] (0xc0006880b0) (0xc0001f6000) Stream removed, broadcasting: 5\nI0120 22:24:20.327982 2753 log.go:172] (0xc0006880b0) (0xc000509400) Stream removed, broadcasting: 1\nI0120 22:24:20.328006 2753 log.go:172] (0xc0006880b0) Go away received\nI0120 22:24:20.329083 2753 log.go:172] (0xc0006880b0) (0xc000509400) Stream removed, broadcasting: 1\nI0120 22:24:20.329102 2753 log.go:172] (0xc0006880b0) (0xc0006bf9a0) Stream removed, broadcasting: 3\nI0120 22:24:20.329106 2753 log.go:172] (0xc0006880b0) (0xc0001f6000) Stream removed, broadcasting: 5\n" Jan 20 22:24:20.340: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 20 22:24:20.340: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 20 22:24:20.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:24:20.827: INFO: stderr: "I0120 22:24:20.622956 2775 log.go:172] (0xc000c71340) (0xc000c84500) Create stream\nI0120 22:24:20.623430 2775 log.go:172] (0xc000c71340) (0xc000c84500) Stream added, broadcasting: 1\nI0120 22:24:20.642273 2775 log.go:172] (0xc000c71340) Reply frame received for 1\nI0120 22:24:20.642450 2775 log.go:172] (0xc000c71340) (0xc0006ba640) Create stream\nI0120 22:24:20.642475 2775 log.go:172] (0xc000c71340) (0xc0006ba640) Stream added, broadcasting: 3\nI0120 22:24:20.644761 2775 log.go:172] (0xc000c71340) Reply frame received for 3\nI0120 22:24:20.644869 2775 log.go:172] (0xc000c71340) (0xc000513400) Create stream\nI0120 22:24:20.644881 2775 log.go:172] (0xc000c71340) (0xc000513400) Stream added, broadcasting: 5\nI0120 22:24:20.649442 2775 log.go:172] (0xc000c71340) Reply frame received for 5\nI0120 22:24:20.750896 2775 log.go:172] (0xc000c71340) Data frame received for 3\nI0120 22:24:20.751078 2775 log.go:172] (0xc0006ba640) (3) Data frame handling\nI0120 22:24:20.751108 2775 log.go:172] (0xc0006ba640) (3) Data frame sent\nI0120 22:24:20.751182 2775 log.go:172] (0xc000c71340) Data frame received for 5\nI0120 22:24:20.751201 2775 log.go:172] (0xc000513400) (5) Data frame handling\nI0120 22:24:20.751228 2775 log.go:172] (0xc000513400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ 
true\nI0120 22:24:20.812752 2775 log.go:172] (0xc000c71340) Data frame received for 1\nI0120 22:24:20.812859 2775 log.go:172] (0xc000c84500) (1) Data frame handling\nI0120 22:24:20.812875 2775 log.go:172] (0xc000c84500) (1) Data frame sent\nI0120 22:24:20.812896 2775 log.go:172] (0xc000c71340) (0xc000c84500) Stream removed, broadcasting: 1\nI0120 22:24:20.813679 2775 log.go:172] (0xc000c71340) (0xc0006ba640) Stream removed, broadcasting: 3\nI0120 22:24:20.813792 2775 log.go:172] (0xc000c71340) (0xc000513400) Stream removed, broadcasting: 5\nI0120 22:24:20.813820 2775 log.go:172] (0xc000c71340) Go away received\nI0120 22:24:20.813901 2775 log.go:172] (0xc000c71340) (0xc000c84500) Stream removed, broadcasting: 1\nI0120 22:24:20.813928 2775 log.go:172] (0xc000c71340) (0xc0006ba640) Stream removed, broadcasting: 3\nI0120 22:24:20.813945 2775 log.go:172] (0xc000c71340) (0xc000513400) Stream removed, broadcasting: 5\n" Jan 20 22:24:20.827: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 20 22:24:20.827: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 20 22:24:20.833: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 20 22:24:20.833: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 20 22:24:20.833: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 20 22:24:20.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 22:24:21.162: INFO: stderr: "I0120 22:24:21.004615 2796 log.go:172] (0xc000afed10) (0xc000a52640) Create stream\nI0120 22:24:21.004843 2796 log.go:172] (0xc000afed10) (0xc000a52640) Stream added, broadcasting: 1\nI0120 22:24:21.008554 2796 log.go:172] (0xc000afed10) Reply frame received for 1\nI0120 22:24:21.008617 2796 log.go:172] (0xc000afed10) (0xc000659d60) Create stream\nI0120 22:24:21.008632 2796 log.go:172] (0xc000afed10) (0xc000659d60) Stream added, broadcasting: 3\nI0120 22:24:21.009883 2796 log.go:172] (0xc000afed10) Reply frame received for 3\nI0120 22:24:21.009904 2796 log.go:172] (0xc000afed10) (0xc0009b4000) Create stream\nI0120 22:24:21.009925 2796 log.go:172] (0xc000afed10) (0xc0009b4000) Stream added, broadcasting: 5\nI0120 22:24:21.010830 2796 log.go:172] (0xc000afed10) Reply frame received for 5\nI0120 22:24:21.071981 2796 log.go:172] (0xc000afed10) Data frame received for 5\nI0120 22:24:21.072049 2796 log.go:172] (0xc0009b4000) (5) Data frame handling\nI0120 22:24:21.072060 2796 log.go:172] (0xc0009b4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 22:24:21.072081 2796 log.go:172] (0xc000afed10) Data frame received for 3\nI0120 22:24:21.072086 2796 log.go:172] (0xc000659d60) (3) Data frame handling\nI0120 22:24:21.072091 2796 log.go:172] (0xc000659d60) (3) Data frame sent\nI0120 22:24:21.148193 2796 log.go:172] (0xc000afed10) Data frame received for 1\nI0120 22:24:21.148339 2796 log.go:172] (0xc000a52640) (1) Data frame handling\nI0120 22:24:21.148365 2796 log.go:172] (0xc000a52640) (1) Data frame sent\nI0120 22:24:21.148391 2796 log.go:172] (0xc000afed10) (0xc000a52640) Stream removed, broadcasting: 1\nI0120 22:24:21.149831 2796 log.go:172] (0xc000afed10) (0xc0009b4000) Stream 
removed, broadcasting: 5\nI0120 22:24:21.149993 2796 log.go:172] (0xc000afed10) (0xc000659d60) Stream removed, broadcasting: 3\nI0120 22:24:21.150016 2796 log.go:172] (0xc000afed10) Go away received\nI0120 22:24:21.150259 2796 log.go:172] (0xc000afed10) (0xc000a52640) Stream removed, broadcasting: 1\nI0120 22:24:21.150373 2796 log.go:172] (0xc000afed10) (0xc000659d60) Stream removed, broadcasting: 3\nI0120 22:24:21.150389 2796 log.go:172] (0xc000afed10) (0xc0009b4000) Stream removed, broadcasting: 5\n" Jan 20 22:24:21.162: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 22:24:21.162: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 22:24:21.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 22:24:21.573: INFO: stderr: "I0120 22:24:21.367208 2814 log.go:172] (0xc000a5a630) (0xc000689e00) Create stream\nI0120 22:24:21.367475 2814 log.go:172] (0xc000a5a630) (0xc000689e00) Stream added, broadcasting: 1\nI0120 22:24:21.370096 2814 log.go:172] (0xc000a5a630) Reply frame received for 1\nI0120 22:24:21.370161 2814 log.go:172] (0xc000a5a630) (0xc00064e6e0) Create stream\nI0120 22:24:21.370176 2814 log.go:172] (0xc000a5a630) (0xc00064e6e0) Stream added, broadcasting: 3\nI0120 22:24:21.371208 2814 log.go:172] (0xc000a5a630) Reply frame received for 3\nI0120 22:24:21.371241 2814 log.go:172] (0xc000a5a630) (0xc0005174a0) Create stream\nI0120 22:24:21.371248 2814 log.go:172] (0xc000a5a630) (0xc0005174a0) Stream added, broadcasting: 5\nI0120 22:24:21.372206 2814 log.go:172] (0xc000a5a630) Reply frame received for 5\nI0120 22:24:21.437805 2814 log.go:172] (0xc000a5a630) Data frame received for 5\nI0120 22:24:21.437914 2814 log.go:172] (0xc0005174a0) (5) Data frame handling\nI0120 22:24:21.437942 2814 log.go:172] (0xc0005174a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 22:24:21.466566 2814 log.go:172] (0xc000a5a630) Data frame received for 3\nI0120 22:24:21.466755 2814 log.go:172] (0xc00064e6e0) (3) Data frame handling\nI0120 22:24:21.466791 2814 log.go:172] (0xc00064e6e0) (3) Data frame sent\nI0120 22:24:21.555559 2814 log.go:172] (0xc000a5a630) (0xc00064e6e0) Stream removed, broadcasting: 3\nI0120 22:24:21.555902 2814 log.go:172] (0xc000a5a630) Data frame received for 1\nI0120 22:24:21.555920 2814 log.go:172] (0xc000689e00) (1) Data frame handling\nI0120 22:24:21.555942 2814 log.go:172] (0xc000689e00) (1) Data frame sent\nI0120 22:24:21.555946 2814 log.go:172] (0xc000a5a630) (0xc000689e00) Stream removed, broadcasting: 1\nI0120 22:24:21.556474 2814 log.go:172] (0xc000a5a630) (0xc0005174a0) Stream removed, broadcasting: 5\nI0120 22:24:21.556795 2814 log.go:172] (0xc000a5a630) Go away received\nI0120 22:24:21.557619 2814 log.go:172] (0xc000a5a630) (0xc000689e00) Stream removed, broadcasting: 1\nI0120 22:24:21.557678 2814 log.go:172] (0xc000a5a630) (0xc00064e6e0) Stream removed, broadcasting: 3\nI0120 22:24:21.557687 2814 log.go:172] (0xc000a5a630) (0xc0005174a0) Stream removed, broadcasting: 5\n" Jan 20 22:24:21.574: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 22:24:21.574: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 22:24:21.574: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 22:24:21.907: INFO: stderr: "I0120 22:24:21.718882 2835 log.go:172] (0xc00097f3f0) (0xc00096c500) Create stream\nI0120 22:24:21.719251 2835 log.go:172] (0xc00097f3f0) (0xc00096c500) Stream added, broadcasting: 1\nI0120 22:24:21.726898 2835 log.go:172] (0xc00097f3f0) Reply frame received for 1\nI0120 22:24:21.728387 2835 log.go:172] (0xc00097f3f0) (0xc00096a280) Create stream\nI0120 22:24:21.728490 2835 log.go:172] (0xc00097f3f0) (0xc00096a280) Stream added, broadcasting: 3\nI0120 22:24:21.731567 2835 log.go:172] (0xc00097f3f0) Reply frame received for 3\nI0120 22:24:21.731594 2835 log.go:172] (0xc00097f3f0) (0xc00071a640) Create stream\nI0120 22:24:21.731609 2835 log.go:172] (0xc00097f3f0) (0xc00071a640) Stream added, broadcasting: 5\nI0120 22:24:21.732852 2835 log.go:172] (0xc00097f3f0) Reply frame received for 5\nI0120 22:24:21.792719 2835 log.go:172] (0xc00097f3f0) Data frame received for 5\nI0120 22:24:21.792901 2835 log.go:172] (0xc00071a640) (5) Data frame handling\nI0120 22:24:21.792970 2835 log.go:172] (0xc00071a640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 22:24:21.822374 2835 log.go:172] (0xc00097f3f0) Data frame received for 3\nI0120 22:24:21.822448 2835 log.go:172] (0xc00096a280) (3) Data frame handling\nI0120 22:24:21.822469 2835 log.go:172] (0xc00096a280) (3) Data frame sent\nI0120 22:24:21.897466 2835 log.go:172] (0xc00097f3f0) Data frame received for 1\nI0120 22:24:21.897557 2835 log.go:172] (0xc00096c500) (1) Data frame handling\nI0120 22:24:21.897603 2835 log.go:172] (0xc00096c500) (1) Data frame sent\nI0120 22:24:21.897753 2835 log.go:172] (0xc00097f3f0) (0xc00071a640) Stream removed, broadcasting: 5\nI0120 22:24:21.897889 2835 log.go:172] (0xc00097f3f0) (0xc00096c500) Stream removed, broadcasting: 1\nI0120 22:24:21.898687 2835 log.go:172] (0xc00097f3f0) (0xc00096a280) Stream removed, broadcasting: 3\nI0120 22:24:21.898861 2835 log.go:172] (0xc00097f3f0) (0xc00096c500) Stream removed, broadcasting: 1\nI0120 22:24:21.898910 2835 log.go:172] (0xc00097f3f0) (0xc00096a280) Stream removed, broadcasting: 3\nI0120 22:24:21.898995 2835 log.go:172] (0xc00097f3f0) (0xc00071a640) Stream removed, broadcasting: 5\n" Jan 20 22:24:21.908: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 22:24:21.908: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 22:24:21.908: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 22:24:21.913: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 20 22:24:31.929: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 20 22:24:31.929: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 20 22:24:31.929: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 20 22:24:31.951: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 22:24:31.951: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 
+0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC }] Jan 20 22:24:31.952: INFO: ss-1 jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:31.952: INFO: ss-2 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:31.952: INFO: Jan 20 22:24:31.952: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 22:24:33.528: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 22:24:33.528: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC }] Jan 20 22:24:33.528: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:33.529: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:33.529: INFO: Jan 20 22:24:33.529: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 22:24:34.541: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 22:24:34.541: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC }] Jan 20 22:24:34.541: INFO: ss-1 
jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:34.541: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:34.541: INFO: Jan 20 22:24:34.541: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 22:24:35.985: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 22:24:35.985: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC }] Jan 20 22:24:35.985: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:35.985: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:35.985: INFO: Jan 20 22:24:35.985: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 22:24:36.999: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 22:24:37.000: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC }] Jan 20 22:24:37.000: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:37.000: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:37.000: INFO: Jan 20 22:24:37.000: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 22:24:38.008: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 22:24:38.009: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC }] Jan 20 22:24:38.009: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:38.009: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:38.009: INFO: Jan 20 22:24:38.009: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 22:24:39.020: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 22:24:39.020: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC }] Jan 20 22:24:39.020: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:39.020: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:39.020: INFO: Jan 20 22:24:39.020: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 22:24:40.028: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 22:24:40.028: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC }] Jan 20 22:24:40.028: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:40.028: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:40.028: INFO: Jan 20 22:24:40.028: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 22:24:41.040: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 22:24:41.040: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:23:38 +0000 UTC }] Jan 20 22:24:41.040: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }] Jan 20 22:24:41.040: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 
+0000 UTC 2020-01-20 22:24:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 22:24:08 +0000 UTC }]
Jan 20 22:24:41.040: INFO:
Jan 20 22:24:41.040: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8542
Jan 20 22:24:42.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 20 22:24:42.273: INFO: rc: 1
Jan 20 22:24:42.274: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1
Jan 20 22:24:52.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 20 22:24:52.450: INFO: rc: 1
Jan 20 22:24:52.450: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 20 22:25:02.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 20 22:25:02.627: INFO: rc: 1
Jan 20 22:25:02.627: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 20 22:25:12.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 20 22:25:12.813: INFO: rc: 1
Jan 20 22:25:12.814: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 20 22:25:22.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 20 22:25:23.007: INFO: rc: 1
Jan 20 22:25:23.007: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error:
exit status 1 Jan 20 22:25:33.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:25:33.190: INFO: rc: 1 Jan 20 22:25:33.191: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:25:43.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:25:43.387: INFO: rc: 1 Jan 20 22:25:43.387: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:25:53.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:25:53.592: INFO: rc: 1 Jan 20 22:25:53.593: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:26:03.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:26:03.773: INFO: rc: 1 Jan 20 22:26:03.774: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:26:13.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:26:13.999: INFO: rc: 1 Jan 20 22:26:13.999: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:26:24.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:26:24.115: INFO: rc: 1 Jan 20 22:26:24.115: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:26:34.116: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:26:34.279: INFO: rc: 1 Jan 20 22:26:34.279: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:26:44.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:26:46.495: INFO: rc: 1 Jan 20 22:26:46.495: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:26:56.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:26:56.674: INFO: rc: 1 Jan 20 22:26:56.675: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:27:06.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:27:06.893: INFO: rc: 1 Jan 20 22:27:06.893: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:27:16.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:27:17.080: INFO: rc: 1 Jan 20 22:27:17.081: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:27:27.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:27:27.275: INFO: rc: 1 Jan 20 22:27:27.275: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:27:37.276: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:27:37.627: INFO: rc: 1 Jan 20 22:27:37.627: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:27:47.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:27:47.823: INFO: rc: 1 Jan 20 22:27:47.824: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:27:57.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:27:58.015: INFO: rc: 1 Jan 20 22:27:58.016: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:28:08.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:28:08.173: INFO: rc: 1 Jan 20 22:28:08.173: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:28:18.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:28:18.327: INFO: rc: 1 Jan 20 22:28:18.327: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:28:28.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:28:28.656: INFO: rc: 1 Jan 20 22:28:28.657: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:28:38.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:28:38.855: INFO: rc: 1 Jan 20 22:28:38.855: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:28:48.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:28:49.082: INFO: rc: 1 Jan 20 22:28:49.082: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:28:59.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:28:59.280: INFO: rc: 1 Jan 20 22:28:59.280: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:29:09.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:29:09.495: INFO: rc: 1 Jan 20 22:29:09.495: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:29:19.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:29:19.689: INFO: rc: 1 Jan 20 22:29:19.690: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:29:29.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:29:29.919: INFO: rc: 1 Jan 20 22:29:29.920: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:29:39.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:29:40.128: INFO: rc: 1 Jan 20 22:29:40.128: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 20 22:29:50.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8542 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 22:29:50.322: INFO: rc: 1 Jan 20 22:29:50.323: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jan 20 22:29:50.323: INFO: Scaling statefulset ss to 0 Jan 20 22:29:50.343: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 20 22:29:50.345: INFO: Deleting all statefulset in ns statefulset-8542 Jan 20 22:29:50.348: INFO: Scaling statefulset ss to 0 Jan 20 22:29:50.358: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 22:29:50.360: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 20 22:29:50.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8542" for this suite. • [SLOW TEST:374.026 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":215,"skipped":3611,"failed":0} SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 20 22:29:50.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 20 22:29:50.581: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/:
alternatives.log
apt/
... (200; 28.901904ms)
Jan 20 22:29:50.614: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 32.317433ms)
Jan 20 22:29:50.618: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.557918ms)
Jan 20 22:29:50.626: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 7.410636ms)
Jan 20 22:29:50.630: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.551288ms)
Jan 20 22:29:50.669: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 38.197666ms)
Jan 20 22:29:50.675: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 5.749735ms)
Jan 20 22:29:50.678: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.491751ms)
Jan 20 22:29:50.681: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.176844ms)
Jan 20 22:29:50.684: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.108801ms)
Jan 20 22:29:50.688: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.374062ms)
Jan 20 22:29:50.692: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.174852ms)
Jan 20 22:29:50.695: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.161238ms)
Jan 20 22:29:50.698: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.009333ms)
Jan 20 22:29:50.702: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.269243ms)
Jan 20 22:29:50.706: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.436724ms)
Jan 20 22:29:50.711: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.667967ms)
Jan 20 22:29:50.739: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 28.177732ms)
Jan 20 22:29:50.743: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.905725ms)
Jan 20 22:29:50.747: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.594947ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:29:50.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1237" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":216,"skipped":3619,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
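
The twenty numbered attempts above all hit the node proxy "logs" subresource, which the API server forwards to the kubelet's /var/log file listing; that is why each 200 response enumerates host log files such as alternatives.log and apt/. One attempt can be reproduced by hand with kubectl's raw API access (kubeconfig path and node name taken from this run):

# List the node's /var/log directory through the API server's node proxy:
$ kubectl --kubeconfig=/root/.kube/config get --raw \
    "/api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/"
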
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:29:50.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-1f72c2e7-ddbe-48ea-8095-99fe5cd3c874 in namespace container-probe-4029
Jan 20 22:29:56.934: INFO: Started pod busybox-1f72c2e7-ddbe-48ea-8095-99fe5cd3c874 in namespace container-probe-4029
STEP: checking the pod's current state and verifying that restartCount is present
Jan 20 22:29:56.941: INFO: Initial restart count of pod busybox-1f72c2e7-ddbe-48ea-8095-99fe5cd3c874 is 0
Jan 20 22:30:49.217: INFO: Restart count of pod container-probe-4029/busybox-1f72c2e7-ddbe-48ea-8095-99fe5cd3c874 is now 1 (52.276460995s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:30:49.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4029" for this suite.

• [SLOW TEST:58.517 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3642,"failed":0}
SSSS
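
The restart recorded above is produced by an exec liveness probe: the kubelet periodically runs `cat /tmp/health` inside the container, and once the file disappears the probe fails and the container is restarted (restartCount went 0 -> 1 about 52s in). A minimal sketch of a comparable pod, assuming a busybox image that creates and later removes the probed file; the pod name and timings are illustrative, not the test's exact values:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox
    # Create the health file, then remove it so the probe starts failing.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
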
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:30:49.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
STEP: creating the pod
Jan 20 22:30:49.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2851'
Jan 20 22:30:49.950: INFO: stderr: ""
Jan 20 22:30:49.951: INFO: stdout: "pod/pause created\n"
Jan 20 22:30:49.952: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 20 22:30:49.952: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2851" to be "running and ready"
Jan 20 22:30:50.088: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 135.685648ms
Jan 20 22:30:52.099: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146782793s
Jan 20 22:30:54.105: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152320443s
Jan 20 22:30:56.115: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163117737s
Jan 20 22:30:58.125: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172544845s
Jan 20 22:31:00.132: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.18018005s
Jan 20 22:31:00.133: INFO: Pod "pause" satisfied condition "running and ready"
Jan 20 22:31:00.133: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 20 22:31:00.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2851'
Jan 20 22:31:00.342: INFO: stderr: ""
Jan 20 22:31:00.342: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 20 22:31:00.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2851'
Jan 20 22:31:00.552: INFO: stderr: ""
Jan 20 22:31:00.552: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 20 22:31:00.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2851'
Jan 20 22:31:00.699: INFO: stderr: ""
Jan 20 22:31:00.700: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 20 22:31:00.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2851'
Jan 20 22:31:00.837: INFO: stderr: ""
Jan 20 22:31:00.837: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369
STEP: using delete to clean up resources
Jan 20 22:31:00.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2851'
Jan 20 22:31:00.983: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 22:31:00.983: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 20 22:31:00.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2851'
Jan 20 22:31:01.204: INFO: stderr: "No resources found in kubectl-2851 namespace.\n"
Jan 20 22:31:01.205: INFO: stdout: ""
Jan 20 22:31:01.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2851 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 20 22:31:01.355: INFO: stderr: ""
Jan 20 22:31:01.355: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:31:01.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2851" for this suite.

• [SLOW TEST:12.090 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":218,"skipped":3646,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
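
The label round-trip above is plain kubectl: key=value adds or updates a label, a trailing "-" removes it, and -L <key> adds a column to kubectl get so the change is visible. The same three commands outside the harness (pod and namespace names from this run):

$ kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-2851
$ kubectl get pod pause -L testing-label --namespace=kubectl-2851
$ kubectl label pods pause testing-label- --namespace=kubectl-2851
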
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:31:01.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-2123
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2123 to expose endpoints map[]
Jan 20 22:31:01.526: INFO: successfully validated that service endpoint-test2 in namespace services-2123 exposes endpoints map[] (7.691728ms elapsed)
STEP: Creating pod pod1 in namespace services-2123
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2123 to expose endpoints map[pod1:[80]]
Jan 20 22:31:05.757: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.082463363s elapsed, will retry)
Jan 20 22:31:10.863: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.187991106s elapsed, will retry)
Jan 20 22:31:11.887: INFO: successfully validated that service endpoint-test2 in namespace services-2123 exposes endpoints map[pod1:[80]] (10.212184308s elapsed)
STEP: Creating pod pod2 in namespace services-2123
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2123 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 20 22:31:16.798: INFO: Unexpected endpoints: found map[ec521992-5d0d-470b-b9af-27568ed3033d:[80]], expected map[pod1:[80] pod2:[80]] (4.905348316s elapsed, will retry)
Jan 20 22:31:18.878: INFO: successfully validated that service endpoint-test2 in namespace services-2123 exposes endpoints map[pod1:[80] pod2:[80]] (6.985281242s elapsed)
STEP: Deleting pod pod1 in namespace services-2123
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2123 to expose endpoints map[pod2:[80]]
Jan 20 22:31:19.914: INFO: successfully validated that service endpoint-test2 in namespace services-2123 exposes endpoints map[pod2:[80]] (1.029264251s elapsed)
STEP: Deleting pod pod2 in namespace services-2123
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2123 to expose endpoints map[]
Jan 20 22:31:22.065: INFO: successfully validated that service endpoint-test2 in namespace services-2123 exposes endpoints map[] (2.141915866s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:31:22.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2123" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:21.089 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":219,"skipped":3698,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:31:22.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 20 22:31:22.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-329a6a1a-e023-485f-8584-a753c8dc2d10" in namespace "projected-2058" to be "success or failure"
Jan 20 22:31:22.714: INFO: Pod "downwardapi-volume-329a6a1a-e023-485f-8584-a753c8dc2d10": Phase="Pending", Reason="", readiness=false. Elapsed: 54.602516ms
Jan 20 22:31:24.771: INFO: Pod "downwardapi-volume-329a6a1a-e023-485f-8584-a753c8dc2d10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111532379s
Jan 20 22:31:26.778: INFO: Pod "downwardapi-volume-329a6a1a-e023-485f-8584-a753c8dc2d10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118588646s
Jan 20 22:31:28.785: INFO: Pod "downwardapi-volume-329a6a1a-e023-485f-8584-a753c8dc2d10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126071613s
Jan 20 22:31:30.797: INFO: Pod "downwardapi-volume-329a6a1a-e023-485f-8584-a753c8dc2d10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.137989876s
STEP: Saw pod success
Jan 20 22:31:30.798: INFO: Pod "downwardapi-volume-329a6a1a-e023-485f-8584-a753c8dc2d10" satisfied condition "success or failure"
Jan 20 22:31:30.801: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-329a6a1a-e023-485f-8584-a753c8dc2d10 container client-container: 
STEP: delete the pod
Jan 20 22:31:30.871: INFO: Waiting for pod downwardapi-volume-329a6a1a-e023-485f-8584-a753c8dc2d10 to disappear
Jan 20 22:31:30.884: INFO: Pod downwardapi-volume-329a6a1a-e023-485f-8584-a753c8dc2d10 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:31:30.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2058" for this suite.

• [SLOW TEST:8.467 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3698,"failed":0}
SSSSSSSS
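
The pod above mounts a downward API volume (through the projected volume source) that exposes the container's memory limit via resourceFieldRef; because the container sets no memory limit, the value falls back to the node's allocatable memory, which is what the test verifies in the container's output. A minimal sketch of such a volume; the pod name, mount path, and file name are illustrative:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # With no memory limit set on this container, the projected value
    # resolves to the node's allocatable memory.
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
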
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:31:30.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-wstvh in namespace proxy-7592
I0120 22:31:31.226080       9 runners.go:189] Created replication controller with name: proxy-service-wstvh, namespace: proxy-7592, replica count: 1
I0120 22:31:32.278580       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 22:31:33.280249       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 22:31:34.280923       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 22:31:35.282346       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 22:31:36.283230       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 22:31:37.284088       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 22:31:38.285055       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 22:31:39.285664       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 22:31:40.286275       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 22:31:41.287099       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 22:31:42.287777       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 22:31:43.288436       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 22:31:44.289018       9 runners.go:189] proxy-service-wstvh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 20 22:31:44.295: INFO: setup took 13.170440953s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 20 22:31:44.326: INFO: (0) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname1/proxy/: foo (200; 29.815573ms)
Jan 20 22:31:44.326: INFO: (0) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 29.863153ms)
Jan 20 22:31:44.327: INFO: (0) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 31.300041ms)
Jan 20 22:31:44.336: INFO: (0) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname1/proxy/: foo (200; 39.386527ms)
Jan 20 22:31:44.336: INFO: (0) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 40.207471ms)
Jan 20 22:31:44.336: INFO: (0) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:1080/proxy/: ... (200; 40.355684ms)
Jan 20 22:31:44.337: INFO: (0) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 40.865734ms)
Jan 20 22:31:44.337: INFO: (0) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname2/proxy/: bar (200; 40.786997ms)
Jan 20 22:31:44.337: INFO: (0) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname2/proxy/: bar (200; 41.31624ms)
Jan 20 22:31:44.339: INFO: (0) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 43.012176ms)
Jan 20 22:31:44.339: INFO: (0) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 42.726388ms)
Jan 20 22:31:44.340: INFO: (0) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname2/proxy/: tls qux (200; 44.052913ms)
Jan 20 22:31:44.340: INFO: (0) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:462/proxy/: tls qux (200; 44.543365ms)
Jan 20 22:31:44.341: INFO: (0) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: test<... (200; 19.330908ms)
Jan 20 22:31:44.372: INFO: (1) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 19.371279ms)
Jan 20 22:31:44.372: INFO: (1) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 19.095206ms)
Jan 20 22:31:44.372: INFO: (1) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: test (200; 19.526531ms)
Jan 20 22:31:44.374: INFO: (1) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:1080/proxy/: ... (200; 21.21968ms)
Jan 20 22:31:44.381: INFO: (2) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 7.006277ms)
Jan 20 22:31:44.382: INFO: (2) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 7.339068ms)
Jan 20 22:31:44.382: INFO: (2) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 8.302421ms)
Jan 20 22:31:44.384: INFO: (2) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 10.025076ms)
Jan 20 22:31:44.385: INFO: (2) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:1080/proxy/: ... (200; 10.499489ms)
Jan 20 22:31:44.385: INFO: (2) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:462/proxy/: tls qux (200; 10.616437ms)
Jan 20 22:31:44.385: INFO: (2) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 10.79668ms)
Jan 20 22:31:44.387: INFO: (2) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: test (200; 13.326156ms)
Jan 20 22:31:44.390: INFO: (2) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 15.362273ms)
Jan 20 22:31:44.393: INFO: (2) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname1/proxy/: foo (200; 18.486028ms)
Jan 20 22:31:44.393: INFO: (2) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname1/proxy/: tls baz (200; 19.097847ms)
Jan 20 22:31:44.394: INFO: (2) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname1/proxy/: foo (200; 19.863631ms)
Jan 20 22:31:44.395: INFO: (2) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname2/proxy/: tls qux (200; 20.753939ms)
Jan 20 22:31:44.397: INFO: (2) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname2/proxy/: bar (200; 22.67944ms)
Jan 20 22:31:44.397: INFO: (2) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname2/proxy/: bar (200; 22.796114ms)
Jan 20 22:31:44.410: INFO: (3) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 11.853533ms)
Jan 20 22:31:44.410: INFO: (3) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:1080/proxy/: ... (200; 12.019381ms)
Jan 20 22:31:44.410: INFO: (3) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 12.294327ms)
Jan 20 22:31:44.410: INFO: (3) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 12.328717ms)
Jan 20 22:31:44.410: INFO: (3) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 12.765143ms)
Jan 20 22:31:44.411: INFO: (3) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 12.893103ms)
Jan 20 22:31:44.411: INFO: (3) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 13.522068ms)
Jan 20 22:31:44.412: INFO: (3) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 13.738611ms)
Jan 20 22:31:44.412: INFO: (3) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:462/proxy/: tls qux (200; 14.053117ms)
Jan 20 22:31:44.412: INFO: (3) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: ... (200; 19.399132ms)
Jan 20 22:31:44.441: INFO: (4) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname1/proxy/: tls baz (200; 20.962252ms)
Jan 20 22:31:44.441: INFO: (4) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname2/proxy/: tls qux (200; 20.944278ms)
Jan 20 22:31:44.441: INFO: (4) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: test (200; 21.846673ms)
Jan 20 22:31:44.442: INFO: (4) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:462/proxy/: tls qux (200; 22.006488ms)
Jan 20 22:31:44.442: INFO: (4) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 22.23479ms)
Jan 20 22:31:44.442: INFO: (4) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 22.346986ms)
Jan 20 22:31:44.442: INFO: (4) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 22.638922ms)
Jan 20 22:31:44.443: INFO: (4) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 22.861592ms)
Jan 20 22:31:44.443: INFO: (4) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 23.418132ms)
Jan 20 22:31:44.455: INFO: (5) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:462/proxy/: tls qux (200; 11.315173ms)
Jan 20 22:31:44.455: INFO: (5) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:1080/proxy/: ... (200; 11.404577ms)
Jan 20 22:31:44.455: INFO: (5) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 11.49204ms)
Jan 20 22:31:44.456: INFO: (5) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 12.534532ms)
Jan 20 22:31:44.456: INFO: (5) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 12.694173ms)
Jan 20 22:31:44.457: INFO: (5) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 12.778359ms)
Jan 20 22:31:44.457: INFO: (5) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 13.304888ms)
Jan 20 22:31:44.457: INFO: (5) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 13.191435ms)
Jan 20 22:31:44.457: INFO: (5) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 13.092991ms)
Jan 20 22:31:44.457: INFO: (5) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: test<... (200; 6.953051ms)
Jan 20 22:31:44.471: INFO: (6) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: ... (200; 10.655408ms)
Jan 20 22:31:44.473: INFO: (6) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 10.75069ms)
Jan 20 22:31:44.473: INFO: (6) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 11.244137ms)
Jan 20 22:31:44.474: INFO: (6) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname1/proxy/: tls baz (200; 12.177266ms)
Jan 20 22:31:44.478: INFO: (6) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 16.480052ms)
Jan 20 22:31:44.479: INFO: (6) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname1/proxy/: foo (200; 16.497105ms)
Jan 20 22:31:44.479: INFO: (6) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname1/proxy/: foo (200; 16.547655ms)
Jan 20 22:31:44.479: INFO: (6) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname2/proxy/: tls qux (200; 16.84541ms)
Jan 20 22:31:44.482: INFO: (6) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname2/proxy/: bar (200; 19.764521ms)
Jan 20 22:31:44.483: INFO: (6) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname2/proxy/: bar (200; 20.065188ms)
Jan 20 22:31:44.492: INFO: (7) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 8.894899ms)
Jan 20 22:31:44.504: INFO: (7) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: test (200; 24.715684ms)
Jan 20 22:31:44.508: INFO: (7) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 25.042633ms)
Jan 20 22:31:44.512: INFO: (7) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:1080/proxy/: ... (200; 28.605292ms)
Jan 20 22:31:44.513: INFO: (7) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname2/proxy/: bar (200; 30.665587ms)
Jan 20 22:31:44.514: INFO: (7) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname2/proxy/: tls qux (200; 30.939592ms)
Jan 20 22:31:44.519: INFO: (7) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname1/proxy/: tls baz (200; 36.086148ms)
Jan 20 22:31:44.521: INFO: (7) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 37.93333ms)
Jan 20 22:31:44.521: INFO: (7) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 38.569698ms)
Jan 20 22:31:44.522: INFO: (7) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:462/proxy/: tls qux (200; 38.527885ms)
Jan 20 22:31:44.522: INFO: (7) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 38.673588ms)
Jan 20 22:31:44.522: INFO: (7) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname1/proxy/: foo (200; 38.819057ms)
Jan 20 22:31:44.522: INFO: (7) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname1/proxy/: foo (200; 38.764157ms)
Jan 20 22:31:44.522: INFO: (7) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname2/proxy/: bar (200; 39.074606ms)
Jan 20 22:31:44.537: INFO: (8) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 12.047087ms)
Jan 20 22:31:44.537: INFO: (8) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 12.483692ms)
Jan 20 22:31:44.537: INFO: (8) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 13.323328ms)
Jan 20 22:31:44.537: INFO: (8) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 13.089864ms)
Jan 20 22:31:44.538: INFO: (8) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: ... (200; 15.361977ms)
Jan 20 22:31:44.541: INFO: (8) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 18.684289ms)
Jan 20 22:31:44.541: INFO: (8) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 17.232759ms)
Jan 20 22:31:44.543: INFO: (8) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 18.124029ms)
Jan 20 22:31:44.544: INFO: (8) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname1/proxy/: tls baz (200; 19.003858ms)
Jan 20 22:31:44.544: INFO: (8) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname2/proxy/: bar (200; 18.347642ms)
Jan 20 22:31:44.545: INFO: (8) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname2/proxy/: bar (200; 20.058425ms)
Jan 20 22:31:44.549: INFO: (9) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 3.724323ms)
Jan 20 22:31:44.551: INFO: (9) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: ... (200; 6.990207ms)
Jan 20 22:31:44.553: INFO: (9) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 7.018737ms)
Jan 20 22:31:44.553: INFO: (9) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 7.720378ms)
Jan 20 22:31:44.553: INFO: (9) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 7.562912ms)
Jan 20 22:31:44.557: INFO: (9) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 11.241046ms)
Jan 20 22:31:44.558: INFO: (9) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 11.768797ms)
Jan 20 22:31:44.559: INFO: (9) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname2/proxy/: bar (200; 12.943791ms)
Jan 20 22:31:44.559: INFO: (9) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname2/proxy/: tls qux (200; 13.311942ms)
Jan 20 22:31:44.561: INFO: (9) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname1/proxy/: foo (200; 15.489481ms)
Jan 20 22:31:44.561: INFO: (9) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname2/proxy/: bar (200; 15.058026ms)
Jan 20 22:31:44.561: INFO: (9) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname1/proxy/: foo (200; 15.294949ms)
Jan 20 22:31:44.562: INFO: (9) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 16.784383ms)
Jan 20 22:31:44.562: INFO: (9) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname1/proxy/: tls baz (200; 16.610586ms)
Jan 20 22:31:44.576: INFO: (10) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: test (200; 14.766875ms)
Jan 20 22:31:44.579: INFO: (10) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 15.930567ms)
Jan 20 22:31:44.579: INFO: (10) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 16.118663ms)
Jan 20 22:31:44.579: INFO: (10) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 16.383926ms)
Jan 20 22:31:44.579: INFO: (10) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:1080/proxy/: ... (200; 16.048754ms)
Jan 20 22:31:44.579: INFO: (10) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 16.102722ms)
Jan 20 22:31:44.580: INFO: (10) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname1/proxy/: foo (200; 16.608104ms)
Jan 20 22:31:44.580: INFO: (10) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname1/proxy/: tls baz (200; 17.048208ms)
Jan 20 22:31:44.580: INFO: (10) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname2/proxy/: bar (200; 17.288053ms)
Jan 20 22:31:44.580: INFO: (10) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 17.861886ms)
Jan 20 22:31:44.581: INFO: (10) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname2/proxy/: bar (200; 18.299544ms)
Jan 20 22:31:44.581: INFO: (10) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname1/proxy/: foo (200; 18.290272ms)
Jan 20 22:31:44.581: INFO: (10) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname2/proxy/: tls qux (200; 18.17378ms)
Jan 20 22:31:44.603: INFO: (11) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 20.179507ms)
Jan 20 22:31:44.603: INFO: (11) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 20.66865ms)
Jan 20 22:31:44.603: INFO: (11) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 20.485415ms)
Jan 20 22:31:44.603: INFO: (11) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:1080/proxy/: ... (200; 19.61731ms)
Jan 20 22:31:44.603: INFO: (11) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 20.87046ms)
Jan 20 22:31:44.604: INFO: (11) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 20.808736ms)
Jan 20 22:31:44.604: INFO: (11) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 21.060604ms)
Jan 20 22:31:44.605: INFO: (11) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: test (200; 9.155906ms)
Jan 20 22:31:44.624: INFO: (12) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 12.005861ms)
Jan 20 22:31:44.624: INFO: (12) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:1080/proxy/: ... (200; 12.5309ms)
Jan 20 22:31:44.624: INFO: (12) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 12.122034ms)
Jan 20 22:31:44.624: INFO: (12) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 12.352696ms)
Jan 20 22:31:44.624: INFO: (12) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 12.549521ms)
Jan 20 22:31:44.624: INFO: (12) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 12.541087ms)
Jan 20 22:31:44.624: INFO: (12) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: ... (200; 15.39459ms)
Jan 20 22:31:44.646: INFO: (13) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 17.250933ms)
Jan 20 22:31:44.646: INFO: (13) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname1/proxy/: foo (200; 17.726218ms)
Jan 20 22:31:44.646: INFO: (13) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 17.396116ms)
Jan 20 22:31:44.646: INFO: (13) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 17.602861ms)
Jan 20 22:31:44.646: INFO: (13) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 17.666653ms)
Jan 20 22:31:44.647: INFO: (13) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: ... (200; 13.969282ms)
Jan 20 22:31:44.662: INFO: (14) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 14.149805ms)
Jan 20 22:31:44.662: INFO: (14) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 14.298292ms)
Jan 20 22:31:44.662: INFO: (14) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 14.543565ms)
Jan 20 22:31:44.663: INFO: (14) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 15.187014ms)
Jan 20 22:31:44.663: INFO: (14) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 15.270807ms)
Jan 20 22:31:44.663: INFO: (14) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 15.505923ms)
Jan 20 22:31:44.663: INFO: (14) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 15.127295ms)
Jan 20 22:31:44.664: INFO: (14) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname2/proxy/: tls qux (200; 16.528535ms)
Jan 20 22:31:44.664: INFO: (14) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname1/proxy/: foo (200; 16.368061ms)
Jan 20 22:31:44.664: INFO: (14) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname2/proxy/: bar (200; 17.052644ms)
Jan 20 22:31:44.665: INFO: (14) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: test<... (200; 14.680802ms)
Jan 20 22:31:44.683: INFO: (15) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 14.748715ms)
Jan 20 22:31:44.686: INFO: (15) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 18.320274ms)
Jan 20 22:31:44.686: INFO: (15) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: ... (200; 20.114752ms)
Jan 20 22:31:44.688: INFO: (15) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:462/proxy/: tls qux (200; 20.892418ms)
Jan 20 22:31:44.688: INFO: (15) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 20.653542ms)
Jan 20 22:31:44.718: INFO: (16) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 28.531461ms)
Jan 20 22:31:44.718: INFO: (16) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 29.50244ms)
Jan 20 22:31:44.718: INFO: (16) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 29.133763ms)
Jan 20 22:31:44.718: INFO: (16) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: test<... (200; 28.990442ms)
Jan 20 22:31:44.719: INFO: (16) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 29.993903ms)
Jan 20 22:31:44.719: INFO: (16) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 30.473143ms)
Jan 20 22:31:44.719: INFO: (16) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:1080/proxy/: ... (200; 30.225133ms)
Jan 20 22:31:44.721: INFO: (16) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname1/proxy/: foo (200; 32.197153ms)
Jan 20 22:31:44.721: INFO: (16) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname2/proxy/: bar (200; 32.819646ms)
Jan 20 22:31:44.722: INFO: (16) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 32.550377ms)
Jan 20 22:31:44.722: INFO: (16) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname1/proxy/: tls baz (200; 32.830332ms)
Jan 20 22:31:44.722: INFO: (16) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname2/proxy/: bar (200; 32.976216ms)
Jan 20 22:31:44.722: INFO: (16) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:462/proxy/: tls qux (200; 32.897275ms)
Jan 20 22:31:44.722: INFO: (16) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname1/proxy/: foo (200; 32.916687ms)
Jan 20 22:31:44.722: INFO: (16) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname2/proxy/: tls qux (200; 33.246946ms)
Jan 20 22:31:44.728: INFO: (17) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 5.913317ms)
Jan 20 22:31:44.728: INFO: (17) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 5.449506ms)
Jan 20 22:31:44.733: INFO: (17) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname1/proxy/: foo (200; 9.865914ms)
Jan 20 22:31:44.733: INFO: (17) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 9.813728ms)
Jan 20 22:31:44.734: INFO: (17) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname1/proxy/: foo (200; 11.687784ms)
Jan 20 22:31:44.734: INFO: (17) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname2/proxy/: bar (200; 11.488465ms)
Jan 20 22:31:44.734: INFO: (17) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 11.95965ms)
Jan 20 22:31:44.735: INFO: (17) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname1/proxy/: tls baz (200; 12.415799ms)
Jan 20 22:31:44.735: INFO: (17) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 12.164441ms)
Jan 20 22:31:44.735: INFO: (17) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:1080/proxy/: ... (200; 12.253348ms)
Jan 20 22:31:44.735: INFO: (17) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 12.341128ms)
Jan 20 22:31:44.735: INFO: (17) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 12.714307ms)
Jan 20 22:31:44.737: INFO: (17) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:462/proxy/: tls qux (200; 14.191655ms)
Jan 20 22:31:44.737: INFO: (17) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: test<... (200; 9.263467ms)
Jan 20 22:31:44.749: INFO: (18) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:462/proxy/: tls qux (200; 9.51328ms)
Jan 20 22:31:44.749: INFO: (18) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 9.293161ms)
Jan 20 22:31:44.749: INFO: (18) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 9.508418ms)
Jan 20 22:31:44.750: INFO: (18) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq/proxy/: test (200; 10.517693ms)
Jan 20 22:31:44.750: INFO: (18) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 10.703484ms)
Jan 20 22:31:44.751: INFO: (18) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 11.354537ms)
Jan 20 22:31:44.751: INFO: (18) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname2/proxy/: bar (200; 11.788528ms)
Jan 20 22:31:44.751: INFO: (18) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: ... (200; 13.094381ms)
Jan 20 22:31:44.753: INFO: (18) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname2/proxy/: bar (200; 13.730763ms)
Jan 20 22:31:44.753: INFO: (18) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname1/proxy/: foo (200; 13.807496ms)
Jan 20 22:31:44.754: INFO: (18) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname1/proxy/: tls baz (200; 14.538118ms)
Jan 20 22:31:44.754: INFO: (18) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname2/proxy/: tls qux (200; 14.666615ms)
Jan 20 22:31:44.756: INFO: (18) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname1/proxy/: foo (200; 16.62284ms)
Jan 20 22:31:44.768: INFO: (19) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:443/proxy/: test (200; 13.754802ms)
Jan 20 22:31:44.770: INFO: (19) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:1080/proxy/: test<... (200; 13.946873ms)
Jan 20 22:31:44.771: INFO: (19) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:460/proxy/: tls baz (200; 14.454599ms)
Jan 20 22:31:44.771: INFO: (19) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:1080/proxy/: ... (200; 14.755785ms)
Jan 20 22:31:44.779: INFO: (19) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:162/proxy/: bar (200; 22.360468ms)
Jan 20 22:31:44.779: INFO: (19) /api/v1/namespaces/proxy-7592/pods/https:proxy-service-wstvh-qb5tq:462/proxy/: tls qux (200; 22.469878ms)
Jan 20 22:31:44.781: INFO: (19) /api/v1/namespaces/proxy-7592/pods/http:proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 23.97073ms)
Jan 20 22:31:44.781: INFO: (19) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname2/proxy/: bar (200; 24.032001ms)
Jan 20 22:31:44.781: INFO: (19) /api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/: foo (200; 24.557942ms)
Jan 20 22:31:44.783: INFO: (19) /api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname1/proxy/: foo (200; 27.023883ms)
Jan 20 22:31:44.811: INFO: (19) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname2/proxy/: bar (200; 54.812214ms)
Jan 20 22:31:44.812: INFO: (19) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname2/proxy/: tls qux (200; 55.375469ms)
Jan 20 22:31:44.813: INFO: (19) /api/v1/namespaces/proxy-7592/services/http:proxy-service-wstvh:portname1/proxy/: foo (200; 56.263922ms)
Jan 20 22:31:44.814: INFO: (19) /api/v1/namespaces/proxy-7592/services/https:proxy-service-wstvh:tlsportname1/proxy/: tls baz (200; 57.969969ms)
STEP: deleting ReplicationController proxy-service-wstvh in namespace proxy-7592, will wait for the garbage collector to delete the pods
Jan 20 22:31:44.883: INFO: Deleting ReplicationController proxy-service-wstvh took: 9.06331ms
Jan 20 22:31:45.184: INFO: Terminating ReplicationController proxy-service-wstvh pods took: 300.8091ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:31:52.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7592" for this suite.

• [SLOW TEST:21.472 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":221,"skipped":3706,"failed":0}
SSSSS
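
Each of the 320 attempts above exercises the API server proxy path visible in the URLs: /api/v1/namespaces/<ns>/services/[https:]<service>:<port-name>/proxy/ forwards through a service, while /api/v1/namespaces/<ns>/pods/[http:|https:]<pod>[:<port>]/proxy/ goes straight to a pod. Two of the attempts reproduced by hand (all names from this run):

# Proxy through the service's named port "portname1":
$ kubectl --kubeconfig=/root/.kube/config get --raw \
    "/api/v1/namespaces/proxy-7592/services/proxy-service-wstvh:portname1/proxy/"
# Hit the backing pod directly on port 160:
$ kubectl --kubeconfig=/root/.kube/config get --raw \
    "/api/v1/namespaces/proxy-7592/pods/proxy-service-wstvh-qb5tq:160/proxy/"
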
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:31:52.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 20 22:31:53.030: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 20 22:31:55.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156313, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156313, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156313, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156313, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:31:57.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156313, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156313, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156313, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156313, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:31:59.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156313, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156313, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156313, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156313, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 20 22:32:02.173: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:32:02.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7593" for this suite.
STEP: Destroying namespace "webhook-7593-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.211 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":222,"skipped":3711,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:32:02.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:32:02.668: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:32:03.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6937" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":223,"skipped":3717,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:32:03.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:32:14.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2678" for this suite.

• [SLOW TEST:11.332 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":224,"skipped":3726,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:32:14.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:32:23.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1430" for this suite.

• [SLOW TEST:8.173 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":225,"skipped":3749,"failed":0}
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:32:23.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 20 22:32:23.229: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6847 /api/v1/namespaces/watch-6847/configmaps/e2e-watch-test-configmap-a 68a911b9-1196-46d4-8ea2-737c193d00b5 3269610 0 2020-01-20 22:32:23 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 20 22:32:23.229: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6847 /api/v1/namespaces/watch-6847/configmaps/e2e-watch-test-configmap-a 68a911b9-1196-46d4-8ea2-737c193d00b5 3269610 0 2020-01-20 22:32:23 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 20 22:32:33.246: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6847 /api/v1/namespaces/watch-6847/configmaps/e2e-watch-test-configmap-a 68a911b9-1196-46d4-8ea2-737c193d00b5 3269644 0 2020-01-20 22:32:23 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 20 22:32:33.247: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6847 /api/v1/namespaces/watch-6847/configmaps/e2e-watch-test-configmap-a 68a911b9-1196-46d4-8ea2-737c193d00b5 3269644 0 2020-01-20 22:32:23 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 20 22:32:43.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6847 /api/v1/namespaces/watch-6847/configmaps/e2e-watch-test-configmap-a 68a911b9-1196-46d4-8ea2-737c193d00b5 3269668 0 2020-01-20 22:32:23 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 20 22:32:43.325: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6847 /api/v1/namespaces/watch-6847/configmaps/e2e-watch-test-configmap-a 68a911b9-1196-46d4-8ea2-737c193d00b5 3269668 0 2020-01-20 22:32:23 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 20 22:32:53.339: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6847 /api/v1/namespaces/watch-6847/configmaps/e2e-watch-test-configmap-a 68a911b9-1196-46d4-8ea2-737c193d00b5 3269690 0 2020-01-20 22:32:23 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 20 22:32:53.339: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6847 /api/v1/namespaces/watch-6847/configmaps/e2e-watch-test-configmap-a 68a911b9-1196-46d4-8ea2-737c193d00b5 3269690 0 2020-01-20 22:32:23 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 20 22:33:03.354: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6847 /api/v1/namespaces/watch-6847/configmaps/e2e-watch-test-configmap-b 2e3cfa3e-6879-40db-92d4-42727de2e363 3269714 0 2020-01-20 22:33:03 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 20 22:33:03.354: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6847 /api/v1/namespaces/watch-6847/configmaps/e2e-watch-test-configmap-b 2e3cfa3e-6879-40db-92d4-42727de2e363 3269714 0 2020-01-20 22:33:03 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 20 22:33:13.368: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6847 /api/v1/namespaces/watch-6847/configmaps/e2e-watch-test-configmap-b 2e3cfa3e-6879-40db-92d4-42727de2e363 3269739 0 2020-01-20 22:33:03 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 20 22:33:13.369: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6847 /api/v1/namespaces/watch-6847/configmaps/e2e-watch-test-configmap-b 2e3cfa3e-6879-40db-92d4-42727de2e363 3269739 0 2020-01-20 22:33:03 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:33:23.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6847" for this suite.

• [SLOW TEST:60.263 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":226,"skipped":3749,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:33:23.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:33:23.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 20 22:33:23.695: INFO: stderr: ""
Jan 20 22:33:23.696: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:33:23.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6582" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":227,"skipped":3759,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:33:23.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 20 22:33:24.533: INFO: Pod name wrapped-volume-race-e94b2910-7570-4a41-bf90-6d078c1bff74: Found 0 pods out of 5
Jan 20 22:33:29.546: INFO: Pod name wrapped-volume-race-e94b2910-7570-4a41-bf90-6d078c1bff74: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e94b2910-7570-4a41-bf90-6d078c1bff74 in namespace emptydir-wrapper-8488, will wait for the garbage collector to delete the pods
Jan 20 22:33:55.697: INFO: Deleting ReplicationController wrapped-volume-race-e94b2910-7570-4a41-bf90-6d078c1bff74 took: 47.276825ms
Jan 20 22:33:56.098: INFO: Terminating ReplicationController wrapped-volume-race-e94b2910-7570-4a41-bf90-6d078c1bff74 pods took: 400.866424ms
STEP: Creating RC which spawns configmap-volume pods
Jan 20 22:34:08.577: INFO: Pod name wrapped-volume-race-650f2d67-2c89-4637-8ac2-1d80c8592784: Found 0 pods out of 5
Jan 20 22:34:13.600: INFO: Pod name wrapped-volume-race-650f2d67-2c89-4637-8ac2-1d80c8592784: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-650f2d67-2c89-4637-8ac2-1d80c8592784 in namespace emptydir-wrapper-8488, will wait for the garbage collector to delete the pods
Jan 20 22:34:45.741: INFO: Deleting ReplicationController wrapped-volume-race-650f2d67-2c89-4637-8ac2-1d80c8592784 took: 10.456134ms
Jan 20 22:34:46.142: INFO: Terminating ReplicationController wrapped-volume-race-650f2d67-2c89-4637-8ac2-1d80c8592784 pods took: 401.225524ms
STEP: Creating RC which spawns configmap-volume pods
Jan 20 22:35:03.242: INFO: Pod name wrapped-volume-race-16067d00-ec91-4d22-9d9e-68c7947f5974: Found 0 pods out of 5
Jan 20 22:35:08.261: INFO: Pod name wrapped-volume-race-16067d00-ec91-4d22-9d9e-68c7947f5974: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-16067d00-ec91-4d22-9d9e-68c7947f5974 in namespace emptydir-wrapper-8488, will wait for the garbage collector to delete the pods
Jan 20 22:35:36.493: INFO: Deleting ReplicationController wrapped-volume-race-16067d00-ec91-4d22-9d9e-68c7947f5974 took: 13.265906ms
Jan 20 22:35:37.094: INFO: Terminating ReplicationController wrapped-volume-race-16067d00-ec91-4d22-9d9e-68c7947f5974 pods took: 600.998846ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:35:50.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8488" for this suite.

• [SLOW TEST:146.770 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":228,"skipped":3768,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:35:50.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:35:50.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1502" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":229,"skipped":3792,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:35:50.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 20 22:35:50.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8" in namespace "downward-api-3507" to be "success or failure"
Jan 20 22:35:50.838: INFO: Pod "downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.490364ms
Jan 20 22:35:52.847: INFO: Pod "downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01745492s
Jan 20 22:35:54.932: INFO: Pod "downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102885062s
Jan 20 22:35:56.983: INFO: Pod "downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1542118s
Jan 20 22:35:59.017: INFO: Pod "downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187261154s
Jan 20 22:36:01.243: INFO: Pod "downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.413770711s
Jan 20 22:36:03.254: INFO: Pod "downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.424392543s
Jan 20 22:36:05.263: INFO: Pod "downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.434247483s
STEP: Saw pod success
Jan 20 22:36:05.264: INFO: Pod "downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8" satisfied condition "success or failure"
Jan 20 22:36:05.269: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8 container client-container: 
STEP: delete the pod
Jan 20 22:36:05.340: INFO: Waiting for pod downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8 to disappear
Jan 20 22:36:05.457: INFO: Pod downwardapi-volume-c5c11b53-a337-4a1a-baee-6057aebf0aa8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:36:05.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3507" for this suite.

• [SLOW TEST:14.778 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3813,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:36:05.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:36:05.615: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:36:06.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3958" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":231,"skipped":3813,"failed":0}

------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:36:06.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 20 22:36:06.497: INFO: Waiting up to 5m0s for pod "downward-api-99bfedfc-90fd-4df8-a6db-33397b11a7ce" in namespace "downward-api-1466" to be "success or failure"
Jan 20 22:36:06.511: INFO: Pod "downward-api-99bfedfc-90fd-4df8-a6db-33397b11a7ce": Phase="Pending", Reason="", readiness=false. Elapsed: 14.075996ms
Jan 20 22:36:08.526: INFO: Pod "downward-api-99bfedfc-90fd-4df8-a6db-33397b11a7ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029329325s
Jan 20 22:36:10.539: INFO: Pod "downward-api-99bfedfc-90fd-4df8-a6db-33397b11a7ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042121107s
Jan 20 22:36:12.554: INFO: Pod "downward-api-99bfedfc-90fd-4df8-a6db-33397b11a7ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057762898s
Jan 20 22:36:14.564: INFO: Pod "downward-api-99bfedfc-90fd-4df8-a6db-33397b11a7ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067463934s
Jan 20 22:36:16.580: INFO: Pod "downward-api-99bfedfc-90fd-4df8-a6db-33397b11a7ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083711627s
STEP: Saw pod success
Jan 20 22:36:16.581: INFO: Pod "downward-api-99bfedfc-90fd-4df8-a6db-33397b11a7ce" satisfied condition "success or failure"
Jan 20 22:36:16.589: INFO: Trying to get logs from node jerma-node pod downward-api-99bfedfc-90fd-4df8-a6db-33397b11a7ce container dapi-container: 
STEP: delete the pod
Jan 20 22:36:16.859: INFO: Waiting for pod downward-api-99bfedfc-90fd-4df8-a6db-33397b11a7ce to disappear
Jan 20 22:36:16.898: INFO: Pod downward-api-99bfedfc-90fd-4df8-a6db-33397b11a7ce no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:36:16.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1466" for this suite.

• [SLOW TEST:10.517 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3813,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:36:16.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:36:17.083: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 20 22:36:22.090: INFO: Pod name rollover-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Jan 20 22:36:24.103: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 20 22:36:26.112: INFO: Creating deployment "test-rollover-deployment"
Jan 20 22:36:26.125: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 20 22:36:28.159: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 20 22:36:28.178: INFO: Ensure that both replica sets have 1 created replica
Jan 20 22:36:28.188: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 20 22:36:28.227: INFO: Updating deployment test-rollover-deployment
Jan 20 22:36:28.227: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 20 22:36:30.363: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 20 22:36:30.380: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 20 22:36:30.399: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 22:36:30.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156588, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:36:32.836: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 22:36:32.836: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156588, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:36:34.413: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 22:36:34.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156588, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:36:36.413: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 22:36:36.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156588, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:36:38.432: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 22:36:38.432: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156597, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:36:40.416: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 22:36:40.417: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156597, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:36:42.419: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 22:36:42.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156597, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:36:44.412: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 22:36:44.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156597, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:36:46.412: INFO: all replica sets need to contain the pod-template-hash label
Jan 20 22:36:46.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156597, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715156586, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:36:48.991: INFO: 
Jan 20 22:36:48.991: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 20 22:36:49.006: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-4353 /apis/apps/v1/namespaces/deployment-4353/deployments/test-rollover-deployment 377420ee-5891-4548-831c-50db67bfde51 3271172 2 2020-01-20 22:36:26 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00325f0f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-20 22:36:26 +0000 UTC,LastTransitionTime:2020-01-20 22:36:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-20 22:36:47 +0000 UTC,LastTransitionTime:2020-01-20 22:36:26 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 20 22:36:49.014: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-4353 /apis/apps/v1/namespaces/deployment-4353/replicasets/test-rollover-deployment-574d6dfbff 63dc7092-7aad-4dda-b830-5cab1d8370dd 3271161 2 2020-01-20 22:36:28 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 377420ee-5891-4548-831c-50db67bfde51 0xc0043bd237 0xc0043bd238}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043bd2a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 20 22:36:49.014: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 20 22:36:49.014: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-4353 /apis/apps/v1/namespaces/deployment-4353/replicasets/test-rollover-controller a6ccaef7-758e-4f69-8c49-c7cf02a6b1b9 3271171 2 2020-01-20 22:36:17 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 377420ee-5891-4548-831c-50db67bfde51 0xc0043bd167 0xc0043bd168}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0043bd1c8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 20 22:36:49.015: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-4353 /apis/apps/v1/namespaces/deployment-4353/replicasets/test-rollover-deployment-f6c94f66c 2bfaf2b8-3045-4819-9b44-3cac78cb21ee 3271108 2 2020-01-20 22:36:26 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 377420ee-5891-4548-831c-50db67bfde51 0xc0043bd310 0xc0043bd311}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043bd388  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 20 22:36:49.021: INFO: Pod "test-rollover-deployment-574d6dfbff-48s67" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-48s67 test-rollover-deployment-574d6dfbff- deployment-4353 /api/v1/namespaces/deployment-4353/pods/test-rollover-deployment-574d6dfbff-48s67 77011b6c-ce11-4960-a538-f1042129bd8e 3271137 0 2020-01-20 22:36:28 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 63dc7092-7aad-4dda-b830-5cab1d8370dd 0xc0043bd8b7 0xc0043bd8b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c596p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c596p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c596p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:36:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:36:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:36:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:36:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-01-20 22:36:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-20 22:36:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://9ca242f660bd164855f77b7d910011053e4b89578c65596a9d6b8cc1a2734671,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:36:49.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4353" for this suite.

• [SLOW TEST:32.127 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":233,"skipped":3836,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:36:49.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jan 20 22:36:49.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3314'
Jan 20 22:36:52.481: INFO: stderr: ""
Jan 20 22:36:52.482: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 22:36:52.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3314'
Jan 20 22:36:52.821: INFO: stderr: ""
Jan 20 22:36:52.821: INFO: stdout: "update-demo-nautilus-n27sx update-demo-nautilus-ptgpw "
Jan 20 22:36:52.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n27sx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:36:52.988: INFO: stderr: ""
Jan 20 22:36:52.988: INFO: stdout: ""
Jan 20 22:36:52.988: INFO: update-demo-nautilus-n27sx is created but not running
Jan 20 22:36:57.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3314'
Jan 20 22:36:58.719: INFO: stderr: ""
Jan 20 22:36:58.720: INFO: stdout: "update-demo-nautilus-n27sx update-demo-nautilus-ptgpw "
Jan 20 22:36:58.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n27sx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:36:59.347: INFO: stderr: ""
Jan 20 22:36:59.347: INFO: stdout: ""
Jan 20 22:36:59.347: INFO: update-demo-nautilus-n27sx is created but not running
Jan 20 22:37:04.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3314'
Jan 20 22:37:04.520: INFO: stderr: ""
Jan 20 22:37:04.520: INFO: stdout: "update-demo-nautilus-n27sx update-demo-nautilus-ptgpw "
Jan 20 22:37:04.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n27sx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:37:04.694: INFO: stderr: ""
Jan 20 22:37:04.695: INFO: stdout: "true"
Jan 20 22:37:04.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n27sx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:37:04.841: INFO: stderr: ""
Jan 20 22:37:04.841: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 22:37:04.841: INFO: validating pod update-demo-nautilus-n27sx
Jan 20 22:37:04.847: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 22:37:04.847: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 20 22:37:04.847: INFO: update-demo-nautilus-n27sx is verified up and running
Jan 20 22:37:04.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ptgpw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:37:04.986: INFO: stderr: ""
Jan 20 22:37:04.987: INFO: stdout: "true"
Jan 20 22:37:04.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ptgpw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:37:05.132: INFO: stderr: ""
Jan 20 22:37:05.132: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 22:37:05.132: INFO: validating pod update-demo-nautilus-ptgpw
Jan 20 22:37:05.140: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 22:37:05.141: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 20 22:37:05.141: INFO: update-demo-nautilus-ptgpw is verified up and running
STEP: scaling down the replication controller
Jan 20 22:37:05.145: INFO: scanned /root for discovery docs: 
Jan 20 22:37:05.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3314'
Jan 20 22:37:06.331: INFO: stderr: ""
Jan 20 22:37:06.331: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 22:37:06.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3314'
Jan 20 22:37:06.536: INFO: stderr: ""
Jan 20 22:37:06.536: INFO: stdout: "update-demo-nautilus-n27sx update-demo-nautilus-ptgpw "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 20 22:37:11.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3314'
Jan 20 22:37:11.692: INFO: stderr: ""
Jan 20 22:37:11.692: INFO: stdout: "update-demo-nautilus-n27sx update-demo-nautilus-ptgpw "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 20 22:37:16.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3314'
Jan 20 22:37:16.932: INFO: stderr: ""
Jan 20 22:37:16.932: INFO: stdout: "update-demo-nautilus-n27sx "
Jan 20 22:37:16.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n27sx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:37:17.085: INFO: stderr: ""
Jan 20 22:37:17.085: INFO: stdout: "true"
Jan 20 22:37:17.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n27sx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:37:17.209: INFO: stderr: ""
Jan 20 22:37:17.209: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 22:37:17.209: INFO: validating pod update-demo-nautilus-n27sx
Jan 20 22:37:17.215: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 22:37:17.215: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 20 22:37:17.215: INFO: update-demo-nautilus-n27sx is verified up and running
STEP: scaling up the replication controller
Jan 20 22:37:17.219: INFO: scanned /root for discovery docs: 
Jan 20 22:37:17.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3314'
Jan 20 22:37:18.426: INFO: stderr: ""
Jan 20 22:37:18.427: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 22:37:18.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3314'
Jan 20 22:37:18.595: INFO: stderr: ""
Jan 20 22:37:18.596: INFO: stdout: "update-demo-nautilus-6rrct update-demo-nautilus-n27sx "
Jan 20 22:37:18.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6rrct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:37:18.717: INFO: stderr: ""
Jan 20 22:37:18.717: INFO: stdout: ""
Jan 20 22:37:18.717: INFO: update-demo-nautilus-6rrct is created but not running
Jan 20 22:37:23.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3314'
Jan 20 22:37:23.917: INFO: stderr: ""
Jan 20 22:37:23.917: INFO: stdout: "update-demo-nautilus-6rrct update-demo-nautilus-n27sx "
Jan 20 22:37:23.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6rrct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:37:24.104: INFO: stderr: ""
Jan 20 22:37:24.104: INFO: stdout: ""
Jan 20 22:37:24.104: INFO: update-demo-nautilus-6rrct is created but not running
Jan 20 22:37:29.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3314'
Jan 20 22:37:29.297: INFO: stderr: ""
Jan 20 22:37:29.298: INFO: stdout: "update-demo-nautilus-6rrct update-demo-nautilus-n27sx "
Jan 20 22:37:29.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6rrct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:37:29.457: INFO: stderr: ""
Jan 20 22:37:29.457: INFO: stdout: "true"
Jan 20 22:37:29.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6rrct -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:37:29.595: INFO: stderr: ""
Jan 20 22:37:29.595: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 22:37:29.595: INFO: validating pod update-demo-nautilus-6rrct
Jan 20 22:37:29.603: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 22:37:29.603: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 20 22:37:29.603: INFO: update-demo-nautilus-6rrct is verified up and running
Jan 20 22:37:29.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n27sx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:37:29.744: INFO: stderr: ""
Jan 20 22:37:29.744: INFO: stdout: "true"
Jan 20 22:37:29.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n27sx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3314'
Jan 20 22:37:29.881: INFO: stderr: ""
Jan 20 22:37:29.881: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 22:37:29.881: INFO: validating pod update-demo-nautilus-n27sx
Jan 20 22:37:29.887: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 22:37:29.887: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 20 22:37:29.887: INFO: update-demo-nautilus-n27sx is verified up and running
STEP: using delete to clean up resources
Jan 20 22:37:29.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3314'
Jan 20 22:37:30.067: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 22:37:30.067: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 20 22:37:30.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3314'
Jan 20 22:37:30.246: INFO: stderr: "No resources found in kubectl-3314 namespace.\n"
Jan 20 22:37:30.246: INFO: stdout: ""
Jan 20 22:37:30.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3314 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 20 22:37:30.412: INFO: stderr: ""
Jan 20 22:37:30.412: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:37:30.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3314" for this suite.

• [SLOW TEST:41.390 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":234,"skipped":3872,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:37:30.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 20 22:37:42.058: INFO: &Pod{ObjectMeta:{send-events-45e5b4cd-dd80-4b58-acb8-097e2da01823  events-9542 /api/v1/namespaces/events-9542/pods/send-events-45e5b4cd-dd80-4b58-acb8-097e2da01823 c0e7874c-b911-4636-b88b-f5f796a2e1f0 3271417 0 2020-01-20 22:37:31 +0000 UTC   map[name:foo time:716933232] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kvhcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kvhcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kvhcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:37:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:37:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:37:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-20 22:37:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-20 22:37:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-20 22:37:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://d4d9ea5709cdfe020c7ab035f50679b14cd9833104c76008537577c37edc6f97,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jan 20 22:37:44.074: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 20 22:37:46.083: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:37:46.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9542" for this suite.

• [SLOW TEST:15.774 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":235,"skipped":3897,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:37:46.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Jan 20 22:37:56.452: INFO: Pod pod-hostip-e56756e4-b536-4398-a91b-1332e95d6414 has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:37:56.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2365" for this suite.

• [SLOW TEST:10.263 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3899,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:37:56.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-3083/configmap-test-85962e4d-85b9-4701-b095-5ebd2f30c418
STEP: Creating a pod to test consume configMaps
Jan 20 22:37:56.708: INFO: Waiting up to 5m0s for pod "pod-configmaps-60c10378-688f-4df2-b370-a8fff70d6db2" in namespace "configmap-3083" to be "success or failure"
Jan 20 22:37:56.716: INFO: Pod "pod-configmaps-60c10378-688f-4df2-b370-a8fff70d6db2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.520534ms
Jan 20 22:37:58.723: INFO: Pod "pod-configmaps-60c10378-688f-4df2-b370-a8fff70d6db2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014483417s
Jan 20 22:38:00.746: INFO: Pod "pod-configmaps-60c10378-688f-4df2-b370-a8fff70d6db2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037325216s
Jan 20 22:38:02.754: INFO: Pod "pod-configmaps-60c10378-688f-4df2-b370-a8fff70d6db2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045619383s
Jan 20 22:38:05.414: INFO: Pod "pod-configmaps-60c10378-688f-4df2-b370-a8fff70d6db2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.706104839s
Jan 20 22:38:07.459: INFO: Pod "pod-configmaps-60c10378-688f-4df2-b370-a8fff70d6db2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.751191714s
STEP: Saw pod success
Jan 20 22:38:07.460: INFO: Pod "pod-configmaps-60c10378-688f-4df2-b370-a8fff70d6db2" satisfied condition "success or failure"
Jan 20 22:38:07.465: INFO: Trying to get logs from node jerma-node pod pod-configmaps-60c10378-688f-4df2-b370-a8fff70d6db2 container env-test: 
STEP: delete the pod
Jan 20 22:38:07.694: INFO: Waiting for pod pod-configmaps-60c10378-688f-4df2-b370-a8fff70d6db2 to disappear
Jan 20 22:38:07.699: INFO: Pod pod-configmaps-60c10378-688f-4df2-b370-a8fff70d6db2 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:38:07.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3083" for this suite.

• [SLOW TEST:11.240 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3929,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:38:07.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 20 22:38:17.045: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:38:17.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1799" for this suite.

• [SLOW TEST:9.488 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3953,"failed":0}
S
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:38:17.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:38:17.396: INFO: Waiting up to 5m0s for pod "busybox-user-65534-4f987d65-715b-46ab-bdf2-1a4a2ba12d99" in namespace "security-context-test-7356" to be "success or failure"
Jan 20 22:38:17.417: INFO: Pod "busybox-user-65534-4f987d65-715b-46ab-bdf2-1a4a2ba12d99": Phase="Pending", Reason="", readiness=false. Elapsed: 21.182543ms
Jan 20 22:38:19.427: INFO: Pod "busybox-user-65534-4f987d65-715b-46ab-bdf2-1a4a2ba12d99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03094621s
Jan 20 22:38:21.433: INFO: Pod "busybox-user-65534-4f987d65-715b-46ab-bdf2-1a4a2ba12d99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037309385s
Jan 20 22:38:23.442: INFO: Pod "busybox-user-65534-4f987d65-715b-46ab-bdf2-1a4a2ba12d99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045421871s
Jan 20 22:38:25.449: INFO: Pod "busybox-user-65534-4f987d65-715b-46ab-bdf2-1a4a2ba12d99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053123891s
Jan 20 22:38:27.461: INFO: Pod "busybox-user-65534-4f987d65-715b-46ab-bdf2-1a4a2ba12d99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06441475s
Jan 20 22:38:27.461: INFO: Pod "busybox-user-65534-4f987d65-715b-46ab-bdf2-1a4a2ba12d99" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:38:27.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7356" for this suite.

• [SLOW TEST:10.273 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3954,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:38:27.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 20 22:38:27.622: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 20 22:38:27.651: INFO: Waiting for terminating namespaces to be deleted...
Jan 20 22:38:27.656: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 20 22:38:27.670: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 20 22:38:27.670: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 20 22:38:27.670: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 20 22:38:27.670: INFO: 	Container weave ready: true, restart count 1
Jan 20 22:38:27.670: INFO: 	Container weave-npc ready: true, restart count 0
Jan 20 22:38:27.670: INFO: busybox-user-65534-4f987d65-715b-46ab-bdf2-1a4a2ba12d99 from security-context-test-7356 started at 2020-01-20 22:38:17 +0000 UTC (1 container statuses recorded)
Jan 20 22:38:27.670: INFO: 	Container busybox-user-65534-4f987d65-715b-46ab-bdf2-1a4a2ba12d99 ready: false, restart count 0
Jan 20 22:38:27.670: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 20 22:38:27.695: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 20 22:38:27.695: INFO: 	Container coredns ready: true, restart count 0
Jan 20 22:38:27.695: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 20 22:38:27.695: INFO: 	Container coredns ready: true, restart count 0
Jan 20 22:38:27.695: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 20 22:38:27.695: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 20 22:38:27.695: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 20 22:38:27.695: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 20 22:38:27.695: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 20 22:38:27.695: INFO: 	Container weave ready: true, restart count 0
Jan 20 22:38:27.695: INFO: 	Container weave-npc ready: true, restart count 0
Jan 20 22:38:27.695: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 20 22:38:27.695: INFO: 	Container kube-scheduler ready: true, restart count 3
Jan 20 22:38:27.695: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 20 22:38:27.695: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 20 22:38:27.695: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 20 22:38:27.695: INFO: 	Container etcd ready: true, restart count 1
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1b3e0c72-ae03-4022-be0d-9cf4f1ae096c 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect it to be scheduled
STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-1b3e0c72-ae03-4022-be0d-9cf4f1ae096c from the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1b3e0c72-ae03-4022-be0d-9cf4f1ae096c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:39:00.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9934" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:33.115 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":240,"skipped":3961,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:39:00.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-b6ffc320-0be8-43b2-a759-5ced2f732ac5
STEP: Creating a pod to test consume configMaps
Jan 20 22:39:00.714: INFO: Waiting up to 5m0s for pod "pod-configmaps-ffc9594c-9297-4872-b8bf-7d8a7e8cb732" in namespace "configmap-9000" to be "success or failure"
Jan 20 22:39:00.758: INFO: Pod "pod-configmaps-ffc9594c-9297-4872-b8bf-7d8a7e8cb732": Phase="Pending", Reason="", readiness=false. Elapsed: 43.648355ms
Jan 20 22:39:02.772: INFO: Pod "pod-configmaps-ffc9594c-9297-4872-b8bf-7d8a7e8cb732": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057233524s
Jan 20 22:39:04.780: INFO: Pod "pod-configmaps-ffc9594c-9297-4872-b8bf-7d8a7e8cb732": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065692243s
Jan 20 22:39:06.788: INFO: Pod "pod-configmaps-ffc9594c-9297-4872-b8bf-7d8a7e8cb732": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073296664s
Jan 20 22:39:08.809: INFO: Pod "pod-configmaps-ffc9594c-9297-4872-b8bf-7d8a7e8cb732": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094892589s
STEP: Saw pod success
Jan 20 22:39:08.810: INFO: Pod "pod-configmaps-ffc9594c-9297-4872-b8bf-7d8a7e8cb732" satisfied condition "success or failure"
Jan 20 22:39:08.814: INFO: Trying to get logs from node jerma-node pod pod-configmaps-ffc9594c-9297-4872-b8bf-7d8a7e8cb732 container configmap-volume-test: 
STEP: delete the pod
Jan 20 22:39:08.969: INFO: Waiting for pod pod-configmaps-ffc9594c-9297-4872-b8bf-7d8a7e8cb732 to disappear
Jan 20 22:39:08.997: INFO: Pod pod-configmaps-ffc9594c-9297-4872-b8bf-7d8a7e8cb732 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:39:08.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9000" for this suite.

• [SLOW TEST:8.417 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3963,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:39:09.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 20 22:39:19.705: INFO: Successfully updated pod "annotationupdateb3915ad6-4768-4a7e-ad3f-5217ba9dacc0"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:39:21.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5972" for this suite.

• [SLOW TEST:12.890 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3977,"failed":0}
SSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:39:21.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:39:22.073: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-23c24425-ab9f-4559-991d-e594d61ed7f4" in namespace "security-context-test-1147" to be "success or failure"
Jan 20 22:39:22.091: INFO: Pod "busybox-readonly-false-23c24425-ab9f-4559-991d-e594d61ed7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.122261ms
Jan 20 22:39:24.097: INFO: Pod "busybox-readonly-false-23c24425-ab9f-4559-991d-e594d61ed7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024049624s
Jan 20 22:39:26.107: INFO: Pod "busybox-readonly-false-23c24425-ab9f-4559-991d-e594d61ed7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034132706s
Jan 20 22:39:28.115: INFO: Pod "busybox-readonly-false-23c24425-ab9f-4559-991d-e594d61ed7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041302505s
Jan 20 22:39:30.122: INFO: Pod "busybox-readonly-false-23c24425-ab9f-4559-991d-e594d61ed7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049081299s
Jan 20 22:39:32.142: INFO: Pod "busybox-readonly-false-23c24425-ab9f-4559-991d-e594d61ed7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069012716s
Jan 20 22:39:34.148: INFO: Pod "busybox-readonly-false-23c24425-ab9f-4559-991d-e594d61ed7f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.074471043s
Jan 20 22:39:34.148: INFO: Pod "busybox-readonly-false-23c24425-ab9f-4559-991d-e594d61ed7f4" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:39:34.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1147" for this suite.

• [SLOW TEST:12.271 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3983,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:39:34.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-b0907aa8-3455-4d7c-8a79-f490a360a374
STEP: Creating a pod to test consume secrets
Jan 20 22:39:34.376: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-41950149-7afc-4209-ace6-88cccdba15b4" in namespace "projected-687" to be "success or failure"
Jan 20 22:39:34.386: INFO: Pod "pod-projected-secrets-41950149-7afc-4209-ace6-88cccdba15b4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.12048ms
Jan 20 22:39:36.455: INFO: Pod "pod-projected-secrets-41950149-7afc-4209-ace6-88cccdba15b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078931889s
Jan 20 22:39:38.463: INFO: Pod "pod-projected-secrets-41950149-7afc-4209-ace6-88cccdba15b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087014912s
Jan 20 22:39:40.482: INFO: Pod "pod-projected-secrets-41950149-7afc-4209-ace6-88cccdba15b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105172462s
Jan 20 22:39:42.488: INFO: Pod "pod-projected-secrets-41950149-7afc-4209-ace6-88cccdba15b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111514978s
STEP: Saw pod success
Jan 20 22:39:42.488: INFO: Pod "pod-projected-secrets-41950149-7afc-4209-ace6-88cccdba15b4" satisfied condition "success or failure"
Jan 20 22:39:42.490: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-41950149-7afc-4209-ace6-88cccdba15b4 container secret-volume-test: 
STEP: delete the pod
Jan 20 22:39:42.598: INFO: Waiting for pod pod-projected-secrets-41950149-7afc-4209-ace6-88cccdba15b4 to disappear
Jan 20 22:39:42.607: INFO: Pod pod-projected-secrets-41950149-7afc-4209-ace6-88cccdba15b4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:39:42.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-687" for this suite.

• [SLOW TEST:8.447 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":3998,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:39:42.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-e80a0110-45ea-4163-b9cc-02744f316334
STEP: Creating a pod to test consume configMaps
Jan 20 22:39:42.792: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f2cd01dd-62d9-4fa1-8b56-c8280aa12adc" in namespace "projected-2956" to be "success or failure"
Jan 20 22:39:42.804: INFO: Pod "pod-projected-configmaps-f2cd01dd-62d9-4fa1-8b56-c8280aa12adc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.369858ms
Jan 20 22:39:44.826: INFO: Pod "pod-projected-configmaps-f2cd01dd-62d9-4fa1-8b56-c8280aa12adc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034253292s
Jan 20 22:39:46.841: INFO: Pod "pod-projected-configmaps-f2cd01dd-62d9-4fa1-8b56-c8280aa12adc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048721837s
Jan 20 22:39:48.856: INFO: Pod "pod-projected-configmaps-f2cd01dd-62d9-4fa1-8b56-c8280aa12adc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064558613s
Jan 20 22:39:50.868: INFO: Pod "pod-projected-configmaps-f2cd01dd-62d9-4fa1-8b56-c8280aa12adc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075789415s
Jan 20 22:39:52.938: INFO: Pod "pod-projected-configmaps-f2cd01dd-62d9-4fa1-8b56-c8280aa12adc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.146434681s
STEP: Saw pod success
Jan 20 22:39:52.938: INFO: Pod "pod-projected-configmaps-f2cd01dd-62d9-4fa1-8b56-c8280aa12adc" satisfied condition "success or failure"
Jan 20 22:39:52.953: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-f2cd01dd-62d9-4fa1-8b56-c8280aa12adc container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 22:39:53.075: INFO: Waiting for pod pod-projected-configmaps-f2cd01dd-62d9-4fa1-8b56-c8280aa12adc to disappear
Jan 20 22:39:53.085: INFO: Pod pod-projected-configmaps-f2cd01dd-62d9-4fa1-8b56-c8280aa12adc no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:39:53.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2956" for this suite.

• [SLOW TEST:10.475 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4040,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:39:53.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:39:53.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:40:01.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1875" for this suite.

• [SLOW TEST:8.239 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4061,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:40:01.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:40:01.514: INFO: Create a RollingUpdate DaemonSet
Jan 20 22:40:01.520: INFO: Check that daemon pods launch on every node of the cluster
Jan 20 22:40:01.609: INFO: Number of nodes with available pods: 0
Jan 20 22:40:01.609: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:40:02.628: INFO: Number of nodes with available pods: 0
Jan 20 22:40:02.628: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:40:04.156: INFO: Number of nodes with available pods: 0
Jan 20 22:40:04.156: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:40:04.636: INFO: Number of nodes with available pods: 0
Jan 20 22:40:04.636: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:40:05.664: INFO: Number of nodes with available pods: 0
Jan 20 22:40:05.664: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:40:07.036: INFO: Number of nodes with available pods: 0
Jan 20 22:40:07.036: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:40:08.083: INFO: Number of nodes with available pods: 0
Jan 20 22:40:08.083: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:40:08.665: INFO: Number of nodes with available pods: 0
Jan 20 22:40:08.666: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:40:09.692: INFO: Number of nodes with available pods: 0
Jan 20 22:40:09.692: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:40:10.630: INFO: Number of nodes with available pods: 0
Jan 20 22:40:10.630: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:40:11.628: INFO: Number of nodes with available pods: 2
Jan 20 22:40:11.628: INFO: Number of running nodes: 2, number of available pods: 2
Jan 20 22:40:11.628: INFO: Update the DaemonSet to trigger a rollout
Jan 20 22:40:11.645: INFO: Updating DaemonSet daemon-set
Jan 20 22:40:17.689: INFO: Roll back the DaemonSet before rollout is complete
Jan 20 22:40:17.696: INFO: Updating DaemonSet daemon-set
Jan 20 22:40:17.696: INFO: Make sure DaemonSet rollback is complete
Jan 20 22:40:17.729: INFO: Wrong image for pod: daemon-set-fph6z. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 20 22:40:17.730: INFO: Pod daemon-set-fph6z is not available
Jan 20 22:40:18.914: INFO: Wrong image for pod: daemon-set-fph6z. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 20 22:40:18.914: INFO: Pod daemon-set-fph6z is not available
Jan 20 22:40:19.764: INFO: Wrong image for pod: daemon-set-fph6z. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 20 22:40:19.764: INFO: Pod daemon-set-fph6z is not available
Jan 20 22:40:20.747: INFO: Wrong image for pod: daemon-set-fph6z. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 20 22:40:20.747: INFO: Pod daemon-set-fph6z is not available
Jan 20 22:40:21.750: INFO: Wrong image for pod: daemon-set-fph6z. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 20 22:40:21.750: INFO: Pod daemon-set-fph6z is not available
Jan 20 22:40:22.747: INFO: Pod daemon-set-rwg45 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9758, will wait for the garbage collector to delete the pods
Jan 20 22:40:22.818: INFO: Deleting DaemonSet.extensions daemon-set took: 7.316347ms
Jan 20 22:40:23.719: INFO: Terminating DaemonSet.extensions daemon-set pods took: 901.097078ms
Jan 20 22:40:32.428: INFO: Number of nodes with available pods: 0
Jan 20 22:40:32.428: INFO: Number of running nodes: 0, number of available pods: 0
Jan 20 22:40:32.433: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9758/daemonsets","resourceVersion":"3272221"},"items":null}

Jan 20 22:40:32.439: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9758/pods","resourceVersion":"3272221"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:40:32.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9758" for this suite.

• [SLOW TEST:31.162 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":247,"skipped":4062,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:40:32.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:40:58.677: INFO: Container started at 2020-01-20 22:40:40 +0000 UTC, pod became ready at 2020-01-20 22:40:57 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:40:58.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3420" for this suite.

• [SLOW TEST:26.193 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4067,"failed":0}
S
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:40:58.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-7914
STEP: creating replication controller nodeport-test in namespace services-7914
I0120 22:40:58.879330       9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-7914, replica count: 2
I0120 22:41:01.931207       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 22:41:04.932083       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 22:41:07.932983       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 22:41:10.934699       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 22:41:13.936114       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 20 22:41:13.936: INFO: Creating new exec pod
Jan 20 22:41:20.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7914 execpodxvj5h -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan 20 22:41:21.294: INFO: stderr: "I0120 22:41:21.142390    4247 log.go:172] (0xc00055a2c0) (0xc00069be00) Create stream\nI0120 22:41:21.142599    4247 log.go:172] (0xc00055a2c0) (0xc00069be00) Stream added, broadcasting: 1\nI0120 22:41:21.148377    4247 log.go:172] (0xc00055a2c0) Reply frame received for 1\nI0120 22:41:21.148481    4247 log.go:172] (0xc00055a2c0) (0xc0005a8780) Create stream\nI0120 22:41:21.148497    4247 log.go:172] (0xc00055a2c0) (0xc0005a8780) Stream added, broadcasting: 3\nI0120 22:41:21.150073    4247 log.go:172] (0xc00055a2c0) Reply frame received for 3\nI0120 22:41:21.150100    4247 log.go:172] (0xc00055a2c0) (0xc00069bea0) Create stream\nI0120 22:41:21.150108    4247 log.go:172] (0xc00055a2c0) (0xc00069bea0) Stream added, broadcasting: 5\nI0120 22:41:21.151475    4247 log.go:172] (0xc00055a2c0) Reply frame received for 5\nI0120 22:41:21.218485    4247 log.go:172] (0xc00055a2c0) Data frame received for 5\nI0120 22:41:21.218612    4247 log.go:172] (0xc00069bea0) (5) Data frame handling\nI0120 22:41:21.218635    4247 log.go:172] (0xc00069bea0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0120 22:41:21.226042    4247 log.go:172] (0xc00055a2c0) Data frame received for 5\nI0120 22:41:21.226103    4247 log.go:172] (0xc00069bea0) (5) Data frame handling\nI0120 22:41:21.226136    4247 log.go:172] (0xc00069bea0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0120 22:41:21.283710    4247 log.go:172] (0xc00055a2c0) Data frame received for 1\nI0120 22:41:21.283968    4247 log.go:172] (0xc00069be00) (1) Data frame handling\nI0120 22:41:21.284009    4247 log.go:172] (0xc00069be00) (1) Data frame sent\nI0120 22:41:21.284052    4247 log.go:172] (0xc00055a2c0) (0xc00069be00) Stream removed, broadcasting: 1\nI0120 22:41:21.284108    4247 log.go:172] (0xc00055a2c0) (0xc0005a8780) Stream removed, broadcasting: 3\nI0120 22:41:21.284185    4247 log.go:172] (0xc00055a2c0) (0xc00069bea0) Stream removed, broadcasting: 5\nI0120 22:41:21.284369    4247 log.go:172] (0xc00055a2c0) Go away received\nI0120 22:41:21.284986    4247 log.go:172] (0xc00055a2c0) (0xc00069be00) Stream removed, broadcasting: 1\nI0120 22:41:21.284999    4247 log.go:172] (0xc00055a2c0) (0xc0005a8780) Stream removed, broadcasting: 3\nI0120 22:41:21.285009    4247 log.go:172] (0xc00055a2c0) (0xc00069bea0) Stream removed, broadcasting: 5\n"
Jan 20 22:41:21.294: INFO: stdout: ""
Jan 20 22:41:21.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7914 execpodxvj5h -- /bin/sh -x -c nc -zv -t -w 2 10.96.228.166 80'
Jan 20 22:41:21.695: INFO: stderr: "I0120 22:41:21.508946    4269 log.go:172] (0xc000964d10) (0xc000a76320) Create stream\nI0120 22:41:21.509179    4269 log.go:172] (0xc000964d10) (0xc000a76320) Stream added, broadcasting: 1\nI0120 22:41:21.516453    4269 log.go:172] (0xc000964d10) Reply frame received for 1\nI0120 22:41:21.517190    4269 log.go:172] (0xc000964d10) (0xc00090c000) Create stream\nI0120 22:41:21.517355    4269 log.go:172] (0xc000964d10) (0xc00090c000) Stream added, broadcasting: 3\nI0120 22:41:21.521968    4269 log.go:172] (0xc000964d10) Reply frame received for 3\nI0120 22:41:21.522628    4269 log.go:172] (0xc000964d10) (0xc00090c0a0) Create stream\nI0120 22:41:21.522675    4269 log.go:172] (0xc000964d10) (0xc00090c0a0) Stream added, broadcasting: 5\nI0120 22:41:21.525245    4269 log.go:172] (0xc000964d10) Reply frame received for 5\nI0120 22:41:21.576838    4269 log.go:172] (0xc000964d10) Data frame received for 5\nI0120 22:41:21.576926    4269 log.go:172] (0xc00090c0a0) (5) Data frame handling\nI0120 22:41:21.576972    4269 log.go:172] (0xc00090c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.228.166 80\nI0120 22:41:21.583662    4269 log.go:172] (0xc000964d10) Data frame received for 5\nI0120 22:41:21.583781    4269 log.go:172] (0xc00090c0a0) (5) Data frame handling\nI0120 22:41:21.583807    4269 log.go:172] (0xc00090c0a0) (5) Data frame sent\nConnection to 10.96.228.166 80 port [tcp/http] succeeded!\nI0120 22:41:21.680821    4269 log.go:172] (0xc000964d10) Data frame received for 1\nI0120 22:41:21.680988    4269 log.go:172] (0xc000a76320) (1) Data frame handling\nI0120 22:41:21.681027    4269 log.go:172] (0xc000a76320) (1) Data frame sent\nI0120 22:41:21.681083    4269 log.go:172] (0xc000964d10) (0xc000a76320) Stream removed, broadcasting: 1\nI0120 22:41:21.681840    4269 log.go:172] (0xc000964d10) (0xc00090c000) Stream removed, broadcasting: 3\nI0120 22:41:21.681968    4269 log.go:172] (0xc000964d10) (0xc00090c0a0) Stream removed, broadcasting: 5\nI0120 22:41:21.682118    4269 log.go:172] (0xc000964d10) Go away received\nI0120 22:41:21.682721    4269 log.go:172] (0xc000964d10) (0xc000a76320) Stream removed, broadcasting: 1\nI0120 22:41:21.682766    4269 log.go:172] (0xc000964d10) (0xc00090c000) Stream removed, broadcasting: 3\nI0120 22:41:21.682785    4269 log.go:172] (0xc000964d10) (0xc00090c0a0) Stream removed, broadcasting: 5\n"
Jan 20 22:41:21.695: INFO: stdout: ""
Jan 20 22:41:21.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7914 execpodxvj5h -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30591'
Jan 20 22:41:22.013: INFO: stderr: "I0120 22:41:21.835090    4291 log.go:172] (0xc000ac36b0) (0xc000aa4820) Create stream\nI0120 22:41:21.835287    4291 log.go:172] (0xc000ac36b0) (0xc000aa4820) Stream added, broadcasting: 1\nI0120 22:41:21.841253    4291 log.go:172] (0xc000ac36b0) Reply frame received for 1\nI0120 22:41:21.841329    4291 log.go:172] (0xc000ac36b0) (0xc0005806e0) Create stream\nI0120 22:41:21.841340    4291 log.go:172] (0xc000ac36b0) (0xc0005806e0) Stream added, broadcasting: 3\nI0120 22:41:21.842222    4291 log.go:172] (0xc000ac36b0) Reply frame received for 3\nI0120 22:41:21.842247    4291 log.go:172] (0xc000ac36b0) (0xc00032d4a0) Create stream\nI0120 22:41:21.842253    4291 log.go:172] (0xc000ac36b0) (0xc00032d4a0) Stream added, broadcasting: 5\nI0120 22:41:21.843196    4291 log.go:172] (0xc000ac36b0) Reply frame received for 5\nI0120 22:41:21.932936    4291 log.go:172] (0xc000ac36b0) Data frame received for 5\nI0120 22:41:21.933033    4291 log.go:172] (0xc00032d4a0) (5) Data frame handling\nI0120 22:41:21.933064    4291 log.go:172] (0xc00032d4a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30591\nI0120 22:41:21.934830    4291 log.go:172] (0xc000ac36b0) Data frame received for 5\nI0120 22:41:21.934859    4291 log.go:172] (0xc00032d4a0) (5) Data frame handling\nI0120 22:41:21.934871    4291 log.go:172] (0xc00032d4a0) (5) Data frame sent\nConnection to 10.96.2.250 30591 port [tcp/30591] succeeded!\nI0120 22:41:22.001405    4291 log.go:172] (0xc000ac36b0) Data frame received for 1\nI0120 22:41:22.001566    4291 log.go:172] (0xc000aa4820) (1) Data frame handling\nI0120 22:41:22.001599    4291 log.go:172] (0xc000aa4820) (1) Data frame sent\nI0120 22:41:22.002092    4291 log.go:172] (0xc000ac36b0) (0xc000aa4820) Stream removed, broadcasting: 1\nI0120 22:41:22.002828    4291 log.go:172] (0xc000ac36b0) (0xc0005806e0) Stream removed, broadcasting: 3\nI0120 22:41:22.002963    4291 log.go:172] (0xc000ac36b0) (0xc00032d4a0) Stream removed, broadcasting: 5\nI0120 22:41:22.003045    4291 log.go:172] (0xc000ac36b0) Go away received\nI0120 22:41:22.003146    4291 log.go:172] (0xc000ac36b0) (0xc000aa4820) Stream removed, broadcasting: 1\nI0120 22:41:22.003173    4291 log.go:172] (0xc000ac36b0) (0xc0005806e0) Stream removed, broadcasting: 3\nI0120 22:41:22.003223    4291 log.go:172] (0xc000ac36b0) (0xc00032d4a0) Stream removed, broadcasting: 5\n"
Jan 20 22:41:22.013: INFO: stdout: ""
Jan 20 22:41:22.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7914 execpodxvj5h -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30591'
Jan 20 22:41:22.464: INFO: stderr: "I0120 22:41:22.244835    4311 log.go:172] (0xc000a19550) (0xc0009ce5a0) Create stream\nI0120 22:41:22.245304    4311 log.go:172] (0xc000a19550) (0xc0009ce5a0) Stream added, broadcasting: 1\nI0120 22:41:22.263264    4311 log.go:172] (0xc000a19550) Reply frame received for 1\nI0120 22:41:22.263417    4311 log.go:172] (0xc000a19550) (0xc000631ae0) Create stream\nI0120 22:41:22.263443    4311 log.go:172] (0xc000a19550) (0xc000631ae0) Stream added, broadcasting: 3\nI0120 22:41:22.265318    4311 log.go:172] (0xc000a19550) Reply frame received for 3\nI0120 22:41:22.265346    4311 log.go:172] (0xc000a19550) (0xc00059e6e0) Create stream\nI0120 22:41:22.265358    4311 log.go:172] (0xc000a19550) (0xc00059e6e0) Stream added, broadcasting: 5\nI0120 22:41:22.267332    4311 log.go:172] (0xc000a19550) Reply frame received for 5\nI0120 22:41:22.346750    4311 log.go:172] (0xc000a19550) Data frame received for 5\nI0120 22:41:22.346942    4311 log.go:172] (0xc00059e6e0) (5) Data frame handling\nI0120 22:41:22.346993    4311 log.go:172] (0xc00059e6e0) (5) Data frame sent\nI0120 22:41:22.347014    4311 log.go:172] (0xc000a19550) Data frame received for 5\nI0120 22:41:22.347019    4311 log.go:172] (0xc00059e6e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.1.234 30591\nConnection to 10.96.1.234 30591 port [tcp/30591] succeeded!\nI0120 22:41:22.347124    4311 log.go:172] (0xc00059e6e0) (5) Data frame sent\nI0120 22:41:22.436670    4311 log.go:172] (0xc000a19550) (0xc00059e6e0) Stream removed, broadcasting: 5\nI0120 22:41:22.436996    4311 log.go:172] (0xc000a19550) (0xc000631ae0) Stream removed, broadcasting: 3\nI0120 22:41:22.437260    4311 log.go:172] (0xc000a19550) Data frame received for 1\nI0120 22:41:22.437357    4311 log.go:172] (0xc0009ce5a0) (1) Data frame handling\nI0120 22:41:22.437415    4311 log.go:172] (0xc0009ce5a0) (1) Data frame sent\nI0120 22:41:22.437478    4311 log.go:172] (0xc000a19550) (0xc0009ce5a0) Stream removed, broadcasting: 1\nI0120 22:41:22.437545    4311 log.go:172] (0xc000a19550) Go away received\nI0120 22:41:22.440984    4311 log.go:172] (0xc000a19550) (0xc0009ce5a0) Stream removed, broadcasting: 1\nI0120 22:41:22.441134    4311 log.go:172] (0xc000a19550) (0xc000631ae0) Stream removed, broadcasting: 3\nI0120 22:41:22.441147    4311 log.go:172] (0xc000a19550) (0xc00059e6e0) Stream removed, broadcasting: 5\n"
Jan 20 22:41:22.464: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:41:22.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7914" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:23.930 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":249,"skipped":4068,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:41:22.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 20 22:41:22.858: INFO: Waiting up to 5m0s for pod "pod-281861a2-d268-47b5-83e8-34956bbd3308" in namespace "emptydir-5545" to be "success or failure"
Jan 20 22:41:22.873: INFO: Pod "pod-281861a2-d268-47b5-83e8-34956bbd3308": Phase="Pending", Reason="", readiness=false. Elapsed: 14.778184ms
Jan 20 22:41:24.881: INFO: Pod "pod-281861a2-d268-47b5-83e8-34956bbd3308": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023329818s
Jan 20 22:41:26.903: INFO: Pod "pod-281861a2-d268-47b5-83e8-34956bbd3308": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045355896s
Jan 20 22:41:28.910: INFO: Pod "pod-281861a2-d268-47b5-83e8-34956bbd3308": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052174926s
Jan 20 22:41:31.939: INFO: Pod "pod-281861a2-d268-47b5-83e8-34956bbd3308": Phase="Pending", Reason="", readiness=false. Elapsed: 9.081085891s
Jan 20 22:41:34.336: INFO: Pod "pod-281861a2-d268-47b5-83e8-34956bbd3308": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.47779899s
STEP: Saw pod success
Jan 20 22:41:34.336: INFO: Pod "pod-281861a2-d268-47b5-83e8-34956bbd3308" satisfied condition "success or failure"
Jan 20 22:41:34.344: INFO: Trying to get logs from node jerma-node pod pod-281861a2-d268-47b5-83e8-34956bbd3308 container test-container: 
STEP: delete the pod
Jan 20 22:41:35.137: INFO: Waiting for pod pod-281861a2-d268-47b5-83e8-34956bbd3308 to disappear
Jan 20 22:41:35.142: INFO: Pod pod-281861a2-d268-47b5-83e8-34956bbd3308 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:41:35.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5545" for this suite.

• [SLOW TEST:12.530 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4069,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:41:35.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 20 22:41:35.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-8673'
Jan 20 22:41:35.531: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 20 22:41:35.531: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718
Jan 20 22:41:39.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8673'
Jan 20 22:41:39.778: INFO: stderr: ""
Jan 20 22:41:39.778: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:41:39.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8673" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":251,"skipped":4108,"failed":0}

------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:41:39.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 20 22:41:40.424: INFO: Number of nodes with available pods: 0
Jan 20 22:41:40.424: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:42.318: INFO: Number of nodes with available pods: 0
Jan 20 22:41:42.318: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:42.835: INFO: Number of nodes with available pods: 0
Jan 20 22:41:42.835: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:43.891: INFO: Number of nodes with available pods: 0
Jan 20 22:41:43.891: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:44.441: INFO: Number of nodes with available pods: 0
Jan 20 22:41:44.441: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:45.463: INFO: Number of nodes with available pods: 0
Jan 20 22:41:45.464: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:48.513: INFO: Number of nodes with available pods: 0
Jan 20 22:41:48.513: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:49.437: INFO: Number of nodes with available pods: 0
Jan 20 22:41:49.437: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:50.446: INFO: Number of nodes with available pods: 0
Jan 20 22:41:50.446: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:51.440: INFO: Number of nodes with available pods: 1
Jan 20 22:41:51.441: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:52.441: INFO: Number of nodes with available pods: 1
Jan 20 22:41:52.441: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:53.504: INFO: Number of nodes with available pods: 2
Jan 20 22:41:53.504: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 20 22:41:53.616: INFO: Number of nodes with available pods: 1
Jan 20 22:41:53.616: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:54.635: INFO: Number of nodes with available pods: 1
Jan 20 22:41:54.635: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:55.633: INFO: Number of nodes with available pods: 1
Jan 20 22:41:55.634: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:56.633: INFO: Number of nodes with available pods: 1
Jan 20 22:41:56.633: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:57.630: INFO: Number of nodes with available pods: 1
Jan 20 22:41:57.630: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:58.640: INFO: Number of nodes with available pods: 1
Jan 20 22:41:58.640: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:41:59.673: INFO: Number of nodes with available pods: 1
Jan 20 22:41:59.673: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:42:00.703: INFO: Number of nodes with available pods: 1
Jan 20 22:42:00.703: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:42:01.634: INFO: Number of nodes with available pods: 1
Jan 20 22:42:01.634: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:42:02.631: INFO: Number of nodes with available pods: 1
Jan 20 22:42:02.631: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:42:03.638: INFO: Number of nodes with available pods: 1
Jan 20 22:42:03.638: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:42:04.633: INFO: Number of nodes with available pods: 1
Jan 20 22:42:04.633: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:42:05.639: INFO: Number of nodes with available pods: 1
Jan 20 22:42:05.639: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:42:06.634: INFO: Number of nodes with available pods: 1
Jan 20 22:42:06.634: INFO: Node jerma-node is running more than one daemon pod
Jan 20 22:42:07.627: INFO: Number of nodes with available pods: 2
Jan 20 22:42:07.627: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1054, will wait for the garbage collector to delete the pods
Jan 20 22:42:07.698: INFO: Deleting DaemonSet.extensions daemon-set took: 9.543984ms
Jan 20 22:42:07.999: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.717502ms
Jan 20 22:42:14.719: INFO: Number of nodes with available pods: 0
Jan 20 22:42:14.720: INFO: Number of running nodes: 0, number of available pods: 0
Jan 20 22:42:14.724: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1054/daemonsets","resourceVersion":"3272681"},"items":null}

Jan 20 22:42:14.727: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1054/pods","resourceVersion":"3272681"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:42:14.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1054" for this suite.

• [SLOW TEST:34.935 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":252,"skipped":4108,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:42:14.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-3df77b03-5b2e-4371-aa9a-bbe3d9592713
STEP: Creating configMap with name cm-test-opt-upd-43b01297-359f-4914-844a-6d297fafa4f1
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-3df77b03-5b2e-4371-aa9a-bbe3d9592713
STEP: Updating configmap cm-test-opt-upd-43b01297-359f-4914-844a-6d297fafa4f1
STEP: Creating configMap with name cm-test-opt-create-6a2d77ca-fd27-4fd9-bf35-9de4e22d5f75
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:43:33.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5425" for this suite.

• [SLOW TEST:79.246 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:43:34.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:43:34.118: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 8.105907ms)
Jan 20 22:43:34.124: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.358048ms)
Jan 20 22:43:34.127: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.066804ms)
Jan 20 22:43:34.131: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.832086ms)
Jan 20 22:43:34.134: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.497642ms)
Jan 20 22:43:34.139: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.463625ms)
Jan 20 22:43:34.145: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.860311ms)
Jan 20 22:43:34.150: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.333877ms)
Jan 20 22:43:34.154: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.172021ms)
Jan 20 22:43:34.186: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 31.971737ms)
Jan 20 22:43:34.191: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.058526ms)
Jan 20 22:43:34.197: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.547186ms)
Jan 20 22:43:34.206: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 9.305889ms)
Jan 20 22:43:34.210: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.81766ms)
Jan 20 22:43:34.214: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.448248ms)
Jan 20 22:43:34.218: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.851401ms)
Jan 20 22:43:34.221: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.346341ms)
Jan 20 22:43:34.225: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.355489ms)
Jan 20 22:43:34.230: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.169075ms)
Jan 20 22:43:34.233: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.50673ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:43:34.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5462" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":254,"skipped":4152,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:43:34.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-243a0dfc-a859-47bf-978f-4b8ca082188c
STEP: Creating a pod to test consume configMaps
Jan 20 22:43:34.429: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-537612f2-52a0-470d-af0a-5ab47168c832" in namespace "projected-9898" to be "success or failure"
Jan 20 22:43:34.437: INFO: Pod "pod-projected-configmaps-537612f2-52a0-470d-af0a-5ab47168c832": Phase="Pending", Reason="", readiness=false. Elapsed: 7.798895ms
Jan 20 22:43:36.446: INFO: Pod "pod-projected-configmaps-537612f2-52a0-470d-af0a-5ab47168c832": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016503161s
Jan 20 22:43:38.480: INFO: Pod "pod-projected-configmaps-537612f2-52a0-470d-af0a-5ab47168c832": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051037818s
Jan 20 22:43:40.493: INFO: Pod "pod-projected-configmaps-537612f2-52a0-470d-af0a-5ab47168c832": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063413422s
Jan 20 22:43:42.505: INFO: Pod "pod-projected-configmaps-537612f2-52a0-470d-af0a-5ab47168c832": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0755695s
Jan 20 22:43:44.515: INFO: Pod "pod-projected-configmaps-537612f2-52a0-470d-af0a-5ab47168c832": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086268054s
STEP: Saw pod success
Jan 20 22:43:44.516: INFO: Pod "pod-projected-configmaps-537612f2-52a0-470d-af0a-5ab47168c832" satisfied condition "success or failure"
Jan 20 22:43:44.520: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-537612f2-52a0-470d-af0a-5ab47168c832 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 22:43:45.096: INFO: Waiting for pod pod-projected-configmaps-537612f2-52a0-470d-af0a-5ab47168c832 to disappear
Jan 20 22:43:45.105: INFO: Pod pod-projected-configmaps-537612f2-52a0-470d-af0a-5ab47168c832 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:43:45.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9898" for this suite.

• [SLOW TEST:10.881 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4158,"failed":0}
SSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:43:45.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 20 22:43:45.564: INFO: Waiting up to 5m0s for pod "downward-api-778a4c28-ba36-4ab9-8341-87bf0e1fe3b8" in namespace "downward-api-9466" to be "success or failure"
Jan 20 22:43:45.689: INFO: Pod "downward-api-778a4c28-ba36-4ab9-8341-87bf0e1fe3b8": Phase="Pending", Reason="", readiness=false. Elapsed: 124.701917ms
Jan 20 22:43:47.697: INFO: Pod "downward-api-778a4c28-ba36-4ab9-8341-87bf0e1fe3b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133173426s
Jan 20 22:43:49.711: INFO: Pod "downward-api-778a4c28-ba36-4ab9-8341-87bf0e1fe3b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147230541s
Jan 20 22:43:51.727: INFO: Pod "downward-api-778a4c28-ba36-4ab9-8341-87bf0e1fe3b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16274873s
Jan 20 22:43:53.735: INFO: Pod "downward-api-778a4c28-ba36-4ab9-8341-87bf0e1fe3b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171345047s
STEP: Saw pod success
Jan 20 22:43:53.736: INFO: Pod "downward-api-778a4c28-ba36-4ab9-8341-87bf0e1fe3b8" satisfied condition "success or failure"
Jan 20 22:43:53.740: INFO: Trying to get logs from node jerma-node pod downward-api-778a4c28-ba36-4ab9-8341-87bf0e1fe3b8 container dapi-container: 
STEP: delete the pod
Jan 20 22:43:54.314: INFO: Waiting for pod downward-api-778a4c28-ba36-4ab9-8341-87bf0e1fe3b8 to disappear
Jan 20 22:43:54.332: INFO: Pod downward-api-778a4c28-ba36-4ab9-8341-87bf0e1fe3b8 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:43:54.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9466" for this suite.

• [SLOW TEST:9.225 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4163,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:43:54.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-57f91e44-46b6-413d-af27-92edaec488bc
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-57f91e44-46b6-413d-af27-92edaec488bc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:45:32.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5415" for this suite.

• [SLOW TEST:97.792 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4177,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:45:32.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9153 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9153;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9153 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9153;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9153.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9153.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9153.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9153.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9153.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9153.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9153.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9153.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9153.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9153.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9153.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9153.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9153.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 97.93.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.93.97_udp@PTR;check="$$(dig +tcp +noall +answer +search 97.93.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.93.97_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9153 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9153;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9153 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9153;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9153.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9153.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9153.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9153.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9153.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9153.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9153.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9153.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9153.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9153.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9153.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9153.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9153.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 97.93.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.93.97_udp@PTR;check="$$(dig +tcp +noall +answer +search 97.93.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.93.97_tcp@PTR;sleep 1; done
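
Note: the partially qualified lookups above succeed because dig +search applies the pod's resolv.conf search path; for a pod in namespace dns-9153 that path typically expands dns-test-service to dns-test-service.dns-9153.svc.cluster.local. For example, inside the probe pod:

cat /etc/resolv.conf    # search dns-9153.svc.cluster.local svc.cluster.local cluster.local ...
dig +search +notcp +noall +answer dns-test-service A
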

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 20 22:45:44.718: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.723: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.731: INFO: Unable to read wheezy_udp@dns-test-service.dns-9153 from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.737: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9153 from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.740: INFO: Unable to read wheezy_udp@dns-test-service.dns-9153.svc from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.744: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9153.svc from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.749: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9153.svc from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.752: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9153.svc from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.779: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.782: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.786: INFO: Unable to read jessie_udp@dns-test-service.dns-9153 from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.788: INFO: Unable to read jessie_tcp@dns-test-service.dns-9153 from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.792: INFO: Unable to read jessie_udp@dns-test-service.dns-9153.svc from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.797: INFO: Unable to read jessie_tcp@dns-test-service.dns-9153.svc from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.836: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9153.svc from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.873: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9153.svc from pod dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01: the server could not find the requested resource (get pods dns-test-c34866e3-3697-4570-96cd-1079f4ffff01)
Jan 20 22:45:44.902: INFO: Lookups using dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9153 wheezy_tcp@dns-test-service.dns-9153 wheezy_udp@dns-test-service.dns-9153.svc wheezy_tcp@dns-test-service.dns-9153.svc wheezy_udp@_http._tcp.dns-test-service.dns-9153.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9153.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9153 jessie_tcp@dns-test-service.dns-9153 jessie_udp@dns-test-service.dns-9153.svc jessie_tcp@dns-test-service.dns-9153.svc jessie_udp@_http._tcp.dns-test-service.dns-9153.svc jessie_tcp@_http._tcp.dns-test-service.dns-9153.svc]

[lookup failures identical to the block above, for the same 16 wheezy/jessie records, repeated on each ~5s poll at 22:45:49, 22:45:54, 22:45:59, 22:46:04, and 22:46:09]

Jan 20 22:46:15.106: INFO: DNS probes using dns-9153/dns-test-c34866e3-3697-4570-96cd-1079f4ffff01 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:46:15.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9153" for this suite.

• [SLOW TEST:43.205 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":258,"skipped":4183,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:46:15.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 20 22:46:15.529: INFO: Waiting up to 5m0s for pod "pod-2a63d3e9-a1f6-40ce-ab56-a3c3a0040ef0" in namespace "emptydir-131" to be "success or failure"
Jan 20 22:46:15.552: INFO: Pod "pod-2a63d3e9-a1f6-40ce-ab56-a3c3a0040ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 23.386103ms
Jan 20 22:46:17.558: INFO: Pod "pod-2a63d3e9-a1f6-40ce-ab56-a3c3a0040ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029264107s
Jan 20 22:46:19.565: INFO: Pod "pod-2a63d3e9-a1f6-40ce-ab56-a3c3a0040ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036062775s
Jan 20 22:46:21.575: INFO: Pod "pod-2a63d3e9-a1f6-40ce-ab56-a3c3a0040ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045683552s
Jan 20 22:46:23.586: INFO: Pod "pod-2a63d3e9-a1f6-40ce-ab56-a3c3a0040ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057241411s
Jan 20 22:46:25.596: INFO: Pod "pod-2a63d3e9-a1f6-40ce-ab56-a3c3a0040ef0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066538364s
STEP: Saw pod success
Jan 20 22:46:25.596: INFO: Pod "pod-2a63d3e9-a1f6-40ce-ab56-a3c3a0040ef0" satisfied condition "success or failure"
Jan 20 22:46:25.601: INFO: Trying to get logs from node jerma-node pod pod-2a63d3e9-a1f6-40ce-ab56-a3c3a0040ef0 container test-container: 
STEP: delete the pod
Jan 20 22:46:25.647: INFO: Waiting for pod pod-2a63d3e9-a1f6-40ce-ab56-a3c3a0040ef0 to disappear
Jan 20 22:46:25.698: INFO: Pod pod-2a63d3e9-a1f6-40ce-ab56-a3c3a0040ef0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:46:25.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-131" for this suite.

• [SLOW TEST:10.359 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4202,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:46:25.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-3b6927cd-0509-41c0-8997-a65e417c9900
STEP: Creating secret with name s-test-opt-upd-00bb2789-61f4-419d-a0d9-d08ea90e317d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3b6927cd-0509-41c0-8997-a65e417c9900
STEP: Updating secret s-test-opt-upd-00bb2789-61f4-419d-a0d9-d08ea90e317d
STEP: Creating secret with name s-test-opt-create-aa6a07c0-a5c3-4267-bf6d-2ff383e06ec4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:48:11.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8683" for this suite.

• [SLOW TEST:106.011 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4224,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:48:11.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0120 22:48:56.309455       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 20 22:48:56.309: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:48:56.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1165" for this suite.

• [SLOW TEST:44.602 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":261,"skipped":4245,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:48:56.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 20 22:48:56.431: INFO: >>> kubeConfig: /root/.kube/config
Jan 20 22:48:59.688: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:49:18.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2049" for this suite.

• [SLOW TEST:21.836 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":262,"skipped":4247,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:49:18.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:49:18.270: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 20 22:49:20.348: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:49:21.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1294" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":263,"skipped":4277,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:49:21.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:49:22.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4298" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":264,"skipped":4280,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:49:22.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 20 22:49:22.671: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:49:46.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1353" for this suite.

• [SLOW TEST:23.549 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":265,"skipped":4307,"failed":0}
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:49:46.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 20 22:49:46.155: INFO: PodSpec: initContainers in spec.initContainers
Jan 20 22:50:46.521: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-48df2f7d-44f1-4567-b472-81b59a136235", GenerateName:"", Namespace:"init-container-1652", SelfLink:"/api/v1/namespaces/init-container-1652/pods/pod-init-48df2f7d-44f1-4567-b472-81b59a136235", UID:"40dfe414-c7dc-45ea-8d03-3a61ee303c65", ResourceVersion:"3274441", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715157386, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"155448238"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4krrs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006a50e40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4krrs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4krrs", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4krrs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0058426e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004264420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005842770)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005842790)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005842798), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00584279c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157386, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157386, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157386, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157386, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc0041f6260), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a269a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a26a10)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://058fde2907ac95b5ff4dc96f1679dbc5b24911334871a61c5236ccdae3fa4af5", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0041f62a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0041f6280), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00584285f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:50:46.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1652" for this suite.

• [SLOW TEST:60.536 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":266,"skipped":4311,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:50:46.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-465d6200-9720-4af5-9092-d2485f0329b5 in namespace container-probe-3877
Jan 20 22:50:54.772: INFO: Started pod busybox-465d6200-9720-4af5-9092-d2485f0329b5 in namespace container-probe-3877
STEP: checking the pod's current state and verifying that restartCount is present
Jan 20 22:50:54.778: INFO: Initial restart count of pod busybox-465d6200-9720-4af5-9092-d2485f0329b5 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:54:55.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3877" for this suite.

• [SLOW TEST:248.617 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4322,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:54:55.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:54:55.289: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:55:00.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8952" for this suite.

• [SLOW TEST:5.329 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":268,"skipped":4390,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:55:00.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-bd2e80ca-134d-4bab-9703-4eac75d5175a
STEP: Creating a pod to test consume secrets
Jan 20 22:55:00.673: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-613db614-2072-4455-b071-1611995d6090" in namespace "projected-9347" to be "success or failure"
Jan 20 22:55:00.692: INFO: Pod "pod-projected-secrets-613db614-2072-4455-b071-1611995d6090": Phase="Pending", Reason="", readiness=false. Elapsed: 18.815355ms
Jan 20 22:55:02.701: INFO: Pod "pod-projected-secrets-613db614-2072-4455-b071-1611995d6090": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027866921s
Jan 20 22:55:04.710: INFO: Pod "pod-projected-secrets-613db614-2072-4455-b071-1611995d6090": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036217933s
Jan 20 22:55:06.726: INFO: Pod "pod-projected-secrets-613db614-2072-4455-b071-1611995d6090": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052046471s
Jan 20 22:55:08.736: INFO: Pod "pod-projected-secrets-613db614-2072-4455-b071-1611995d6090": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062178902s
STEP: Saw pod success
Jan 20 22:55:08.736: INFO: Pod "pod-projected-secrets-613db614-2072-4455-b071-1611995d6090" satisfied condition "success or failure"
Jan 20 22:55:08.739: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-613db614-2072-4455-b071-1611995d6090 container projected-secret-volume-test: 
STEP: delete the pod
Jan 20 22:55:08.781: INFO: Waiting for pod pod-projected-secrets-613db614-2072-4455-b071-1611995d6090 to disappear
Jan 20 22:55:08.786: INFO: Pod pod-projected-secrets-613db614-2072-4455-b071-1611995d6090 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:55:08.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9347" for this suite.

• [SLOW TEST:8.264 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4429,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:55:08.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-7c3ff24d-6a29-4f33-9ff7-55ff73026c34
STEP: Creating a pod to test consume configMaps
Jan 20 22:55:09.040: INFO: Waiting up to 5m0s for pod "pod-configmaps-e6be6d0e-0007-4fd1-9511-e978a7a92cef" in namespace "configmap-446" to be "success or failure"
Jan 20 22:55:09.079: INFO: Pod "pod-configmaps-e6be6d0e-0007-4fd1-9511-e978a7a92cef": Phase="Pending", Reason="", readiness=false. Elapsed: 38.205933ms
Jan 20 22:55:11.093: INFO: Pod "pod-configmaps-e6be6d0e-0007-4fd1-9511-e978a7a92cef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05242323s
Jan 20 22:55:13.099: INFO: Pod "pod-configmaps-e6be6d0e-0007-4fd1-9511-e978a7a92cef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058322082s
Jan 20 22:55:15.167: INFO: Pod "pod-configmaps-e6be6d0e-0007-4fd1-9511-e978a7a92cef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126240533s
Jan 20 22:55:17.180: INFO: Pod "pod-configmaps-e6be6d0e-0007-4fd1-9511-e978a7a92cef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139376269s
Jan 20 22:55:19.209: INFO: Pod "pod-configmaps-e6be6d0e-0007-4fd1-9511-e978a7a92cef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168271798s
STEP: Saw pod success
Jan 20 22:55:19.209: INFO: Pod "pod-configmaps-e6be6d0e-0007-4fd1-9511-e978a7a92cef" satisfied condition "success or failure"
Jan 20 22:55:19.212: INFO: Trying to get logs from node jerma-node pod pod-configmaps-e6be6d0e-0007-4fd1-9511-e978a7a92cef container configmap-volume-test: 
STEP: delete the pod
Jan 20 22:55:19.292: INFO: Waiting for pod pod-configmaps-e6be6d0e-0007-4fd1-9511-e978a7a92cef to disappear
Jan 20 22:55:19.298: INFO: Pod pod-configmaps-e6be6d0e-0007-4fd1-9511-e978a7a92cef no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:55:19.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-446" for this suite.

• [SLOW TEST:10.554 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4443,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:55:19.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 20 22:55:20.431: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 20 22:55:22.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157720, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157720, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157720, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157720, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:55:24.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157720, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157720, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157720, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157720, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 22:55:26.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157720, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157720, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157720, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715157720, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 20 22:55:29.606: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:55:29.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-459" for this suite.
STEP: Destroying namespace "webhook-459-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.524 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":271,"skipped":4473,"failed":0}
SSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:55:29.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 20 22:55:50.036: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6397 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 22:55:50.036: INFO: >>> kubeConfig: /root/.kube/config
I0120 22:55:50.076877       9 log.go:172] (0xc0029629a0) (0xc0007e7b80) Create stream
I0120 22:55:50.077022       9 log.go:172] (0xc0029629a0) (0xc0007e7b80) Stream added, broadcasting: 1
I0120 22:55:50.081724       9 log.go:172] (0xc0029629a0) Reply frame received for 1
I0120 22:55:50.081835       9 log.go:172] (0xc0029629a0) (0xc001d2a960) Create stream
I0120 22:55:50.081855       9 log.go:172] (0xc0029629a0) (0xc001d2a960) Stream added, broadcasting: 3
I0120 22:55:50.084433       9 log.go:172] (0xc0029629a0) Reply frame received for 3
I0120 22:55:50.084469       9 log.go:172] (0xc0029629a0) (0xc001fce0a0) Create stream
I0120 22:55:50.084485       9 log.go:172] (0xc0029629a0) (0xc001fce0a0) Stream added, broadcasting: 5
I0120 22:55:50.086083       9 log.go:172] (0xc0029629a0) Reply frame received for 5
I0120 22:55:50.176547       9 log.go:172] (0xc0029629a0) Data frame received for 3
I0120 22:55:50.176609       9 log.go:172] (0xc001d2a960) (3) Data frame handling
I0120 22:55:50.176643       9 log.go:172] (0xc001d2a960) (3) Data frame sent
I0120 22:55:50.252469       9 log.go:172] (0xc0029629a0) Data frame received for 1
I0120 22:55:50.252577       9 log.go:172] (0xc0029629a0) (0xc001d2a960) Stream removed, broadcasting: 3
I0120 22:55:50.252669       9 log.go:172] (0xc0007e7b80) (1) Data frame handling
I0120 22:55:50.252737       9 log.go:172] (0xc0007e7b80) (1) Data frame sent
I0120 22:55:50.252758       9 log.go:172] (0xc0029629a0) (0xc001fce0a0) Stream removed, broadcasting: 5
I0120 22:55:50.252879       9 log.go:172] (0xc0029629a0) (0xc0007e7b80) Stream removed, broadcasting: 1
I0120 22:55:50.252935       9 log.go:172] (0xc0029629a0) Go away received
I0120 22:55:50.253498       9 log.go:172] (0xc0029629a0) (0xc0007e7b80) Stream removed, broadcasting: 1
I0120 22:55:50.253527       9 log.go:172] (0xc0029629a0) (0xc001d2a960) Stream removed, broadcasting: 3
I0120 22:55:50.253547       9 log.go:172] (0xc0029629a0) (0xc001fce0a0) Stream removed, broadcasting: 5
Jan 20 22:55:50.253: INFO: Exec stderr: ""
Jan 20 22:55:50.253: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6397 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 22:55:50.253: INFO: >>> kubeConfig: /root/.kube/config
I0120 22:55:50.318885       9 log.go:172] (0xc0006c7c30) (0xc001d2afa0) Create stream
I0120 22:55:50.319254       9 log.go:172] (0xc0006c7c30) (0xc001d2afa0) Stream added, broadcasting: 1
I0120 22:55:50.336601       9 log.go:172] (0xc0006c7c30) Reply frame received for 1
I0120 22:55:50.337219       9 log.go:172] (0xc0006c7c30) (0xc001a3cc80) Create stream
I0120 22:55:50.337301       9 log.go:172] (0xc0006c7c30) (0xc001a3cc80) Stream added, broadcasting: 3
I0120 22:55:50.341247       9 log.go:172] (0xc0006c7c30) Reply frame received for 3
I0120 22:55:50.341475       9 log.go:172] (0xc0006c7c30) (0xc000e76320) Create stream
I0120 22:55:50.341510       9 log.go:172] (0xc0006c7c30) (0xc000e76320) Stream added, broadcasting: 5
I0120 22:55:50.345206       9 log.go:172] (0xc0006c7c30) Reply frame received for 5
I0120 22:55:50.465413       9 log.go:172] (0xc0006c7c30) Data frame received for 3
I0120 22:55:50.465776       9 log.go:172] (0xc001a3cc80) (3) Data frame handling
I0120 22:55:50.465876       9 log.go:172] (0xc001a3cc80) (3) Data frame sent
I0120 22:55:50.605171       9 log.go:172] (0xc0006c7c30) (0xc001a3cc80) Stream removed, broadcasting: 3
I0120 22:55:50.606242       9 log.go:172] (0xc0006c7c30) Data frame received for 1
I0120 22:55:50.606701       9 log.go:172] (0xc0006c7c30) (0xc000e76320) Stream removed, broadcasting: 5
I0120 22:55:50.606963       9 log.go:172] (0xc001d2afa0) (1) Data frame handling
I0120 22:55:50.607035       9 log.go:172] (0xc001d2afa0) (1) Data frame sent
I0120 22:55:50.607097       9 log.go:172] (0xc0006c7c30) (0xc001d2afa0) Stream removed, broadcasting: 1
I0120 22:55:50.607204       9 log.go:172] (0xc0006c7c30) Go away received
I0120 22:55:50.608082       9 log.go:172] (0xc0006c7c30) (0xc001d2afa0) Stream removed, broadcasting: 1
I0120 22:55:50.608134       9 log.go:172] (0xc0006c7c30) (0xc001a3cc80) Stream removed, broadcasting: 3
I0120 22:55:50.608168       9 log.go:172] (0xc0006c7c30) (0xc000e76320) Stream removed, broadcasting: 5
Jan 20 22:55:50.608: INFO: Exec stderr: ""
Jan 20 22:55:50.608: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6397 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 22:55:50.608: INFO: >>> kubeConfig: /root/.kube/config
I0120 22:55:50.670754       9 log.go:172] (0xc001720160) (0xc001fce500) Create stream
I0120 22:55:50.671408       9 log.go:172] (0xc001720160) (0xc001fce500) Stream added, broadcasting: 1
I0120 22:55:50.682809       9 log.go:172] (0xc001720160) Reply frame received for 1
I0120 22:55:50.682944       9 log.go:172] (0xc001720160) (0xc001fce5a0) Create stream
I0120 22:55:50.682974       9 log.go:172] (0xc001720160) (0xc001fce5a0) Stream added, broadcasting: 3
I0120 22:55:50.684647       9 log.go:172] (0xc001720160) Reply frame received for 3
I0120 22:55:50.684725       9 log.go:172] (0xc001720160) (0xc000e763c0) Create stream
I0120 22:55:50.684768       9 log.go:172] (0xc001720160) (0xc000e763c0) Stream added, broadcasting: 5
I0120 22:55:50.687553       9 log.go:172] (0xc001720160) Reply frame received for 5
I0120 22:55:50.784978       9 log.go:172] (0xc001720160) Data frame received for 3
I0120 22:55:50.785394       9 log.go:172] (0xc001fce5a0) (3) Data frame handling
I0120 22:55:50.785451       9 log.go:172] (0xc001fce5a0) (3) Data frame sent
I0120 22:55:50.871213       9 log.go:172] (0xc001720160) Data frame received for 1
I0120 22:55:50.871648       9 log.go:172] (0xc001720160) (0xc001fce5a0) Stream removed, broadcasting: 3
I0120 22:55:50.872092       9 log.go:172] (0xc001fce500) (1) Data frame handling
I0120 22:55:50.872245       9 log.go:172] (0xc001fce500) (1) Data frame sent
I0120 22:55:50.872503       9 log.go:172] (0xc001720160) (0xc000e763c0) Stream removed, broadcasting: 5
I0120 22:55:50.872638       9 log.go:172] (0xc001720160) (0xc001fce500) Stream removed, broadcasting: 1
I0120 22:55:50.873075       9 log.go:172] (0xc001720160) (0xc001fce500) Stream removed, broadcasting: 1
I0120 22:55:50.873097       9 log.go:172] (0xc001720160) (0xc001fce5a0) Stream removed, broadcasting: 3
I0120 22:55:50.873159       9 log.go:172] (0xc001720160) (0xc000e763c0) Stream removed, broadcasting: 5
Jan 20 22:55:50.873: INFO: Exec stderr: ""
I0120 22:55:50.873272       9 log.go:172] (0xc001720160) Go away received
Jan 20 22:55:50.873: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6397 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 22:55:50.873: INFO: >>> kubeConfig: /root/.kube/config
I0120 22:55:50.921350       9 log.go:172] (0xc001720790) (0xc001fce820) Create stream
I0120 22:55:50.921551       9 log.go:172] (0xc001720790) (0xc001fce820) Stream added, broadcasting: 1
I0120 22:55:50.933304       9 log.go:172] (0xc001720790) Reply frame received for 1
I0120 22:55:50.933349       9 log.go:172] (0xc001720790) (0xc0007e7e00) Create stream
I0120 22:55:50.933364       9 log.go:172] (0xc001720790) (0xc0007e7e00) Stream added, broadcasting: 3
I0120 22:55:50.934482       9 log.go:172] (0xc001720790) Reply frame received for 3
I0120 22:55:50.934503       9 log.go:172] (0xc001720790) (0xc000e76460) Create stream
I0120 22:55:50.934513       9 log.go:172] (0xc001720790) (0xc000e76460) Stream added, broadcasting: 5
I0120 22:55:50.935491       9 log.go:172] (0xc001720790) Reply frame received for 5
I0120 22:55:50.996981       9 log.go:172] (0xc001720790) Data frame received for 3
I0120 22:55:50.997003       9 log.go:172] (0xc0007e7e00) (3) Data frame handling
I0120 22:55:50.997024       9 log.go:172] (0xc0007e7e00) (3) Data frame sent
I0120 22:55:51.062268       9 log.go:172] (0xc001720790) (0xc0007e7e00) Stream removed, broadcasting: 3
I0120 22:55:51.062328       9 log.go:172] (0xc001720790) Data frame received for 1
I0120 22:55:51.062364       9 log.go:172] (0xc001720790) (0xc000e76460) Stream removed, broadcasting: 5
I0120 22:55:51.062422       9 log.go:172] (0xc001fce820) (1) Data frame handling
I0120 22:55:51.062437       9 log.go:172] (0xc001fce820) (1) Data frame sent
I0120 22:55:51.062447       9 log.go:172] (0xc001720790) (0xc001fce820) Stream removed, broadcasting: 1
I0120 22:55:51.062465       9 log.go:172] (0xc001720790) Go away received
I0120 22:55:51.062644       9 log.go:172] (0xc001720790) (0xc001fce820) Stream removed, broadcasting: 1
I0120 22:55:51.062657       9 log.go:172] (0xc001720790) (0xc0007e7e00) Stream removed, broadcasting: 3
I0120 22:55:51.062669       9 log.go:172] (0xc001720790) (0xc000e76460) Stream removed, broadcasting: 5
Jan 20 22:55:51.062: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 20 22:55:51.062: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6397 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 22:55:51.062: INFO: >>> kubeConfig: /root/.kube/config
I0120 22:55:51.106335       9 log.go:172] (0xc002963130) (0xc00046e280) Create stream
I0120 22:55:51.106586       9 log.go:172] (0xc002963130) (0xc00046e280) Stream added, broadcasting: 1
I0120 22:55:51.112369       9 log.go:172] (0xc002963130) Reply frame received for 1
I0120 22:55:51.112459       9 log.go:172] (0xc002963130) (0xc001d2b2c0) Create stream
I0120 22:55:51.112494       9 log.go:172] (0xc002963130) (0xc001d2b2c0) Stream added, broadcasting: 3
I0120 22:55:51.115363       9 log.go:172] (0xc002963130) Reply frame received for 3
I0120 22:55:51.115418       9 log.go:172] (0xc002963130) (0xc000e76500) Create stream
I0120 22:55:51.115448       9 log.go:172] (0xc002963130) (0xc000e76500) Stream added, broadcasting: 5
I0120 22:55:51.117607       9 log.go:172] (0xc002963130) Reply frame received for 5
I0120 22:55:51.191461       9 log.go:172] (0xc002963130) Data frame received for 3
I0120 22:55:51.191535       9 log.go:172] (0xc001d2b2c0) (3) Data frame handling
I0120 22:55:51.191563       9 log.go:172] (0xc001d2b2c0) (3) Data frame sent
I0120 22:55:51.260182       9 log.go:172] (0xc002963130) Data frame received for 1
I0120 22:55:51.260490       9 log.go:172] (0xc002963130) (0xc001d2b2c0) Stream removed, broadcasting: 3
I0120 22:55:51.260646       9 log.go:172] (0xc00046e280) (1) Data frame handling
I0120 22:55:51.260678       9 log.go:172] (0xc00046e280) (1) Data frame sent
I0120 22:55:51.260710       9 log.go:172] (0xc002963130) (0xc000e76500) Stream removed, broadcasting: 5
I0120 22:55:51.260807       9 log.go:172] (0xc002963130) (0xc00046e280) Stream removed, broadcasting: 1
I0120 22:55:51.260879       9 log.go:172] (0xc002963130) Go away received
I0120 22:55:51.261252       9 log.go:172] (0xc002963130) (0xc00046e280) Stream removed, broadcasting: 1
I0120 22:55:51.261276       9 log.go:172] (0xc002963130) (0xc001d2b2c0) Stream removed, broadcasting: 3
I0120 22:55:51.261333       9 log.go:172] (0xc002963130) (0xc000e76500) Stream removed, broadcasting: 5
Jan 20 22:55:51.261: INFO: Exec stderr: ""
Jan 20 22:55:51.261: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6397 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 22:55:51.261: INFO: >>> kubeConfig: /root/.kube/config
I0120 22:55:51.300284       9 log.go:172] (0xc0045b0420) (0xc000e76d20) Create stream
I0120 22:55:51.300421       9 log.go:172] (0xc0045b0420) (0xc000e76d20) Stream added, broadcasting: 1
I0120 22:55:51.304170       9 log.go:172] (0xc0045b0420) Reply frame received for 1
I0120 22:55:51.304343       9 log.go:172] (0xc0045b0420) (0xc001d2b400) Create stream
I0120 22:55:51.304369       9 log.go:172] (0xc0045b0420) (0xc001d2b400) Stream added, broadcasting: 3
I0120 22:55:51.305659       9 log.go:172] (0xc0045b0420) Reply frame received for 3
I0120 22:55:51.305686       9 log.go:172] (0xc0045b0420) (0xc000e76dc0) Create stream
I0120 22:55:51.305700       9 log.go:172] (0xc0045b0420) (0xc000e76dc0) Stream added, broadcasting: 5
I0120 22:55:51.306982       9 log.go:172] (0xc0045b0420) Reply frame received for 5
I0120 22:55:51.377355       9 log.go:172] (0xc0045b0420) Data frame received for 3
I0120 22:55:51.377512       9 log.go:172] (0xc001d2b400) (3) Data frame handling
I0120 22:55:51.377560       9 log.go:172] (0xc001d2b400) (3) Data frame sent
I0120 22:55:51.472386       9 log.go:172] (0xc0045b0420) (0xc000e76dc0) Stream removed, broadcasting: 5
I0120 22:55:51.472573       9 log.go:172] (0xc0045b0420) Data frame received for 1
I0120 22:55:51.472631       9 log.go:172] (0xc0045b0420) (0xc001d2b400) Stream removed, broadcasting: 3
I0120 22:55:51.472732       9 log.go:172] (0xc000e76d20) (1) Data frame handling
I0120 22:55:51.472842       9 log.go:172] (0xc000e76d20) (1) Data frame sent
I0120 22:55:51.472864       9 log.go:172] (0xc0045b0420) (0xc000e76d20) Stream removed, broadcasting: 1
I0120 22:55:51.472960       9 log.go:172] (0xc0045b0420) Go away received
I0120 22:55:51.473424       9 log.go:172] (0xc0045b0420) (0xc000e76d20) Stream removed, broadcasting: 1
I0120 22:55:51.473446       9 log.go:172] (0xc0045b0420) (0xc001d2b400) Stream removed, broadcasting: 3
I0120 22:55:51.473451       9 log.go:172] (0xc0045b0420) (0xc000e76dc0) Stream removed, broadcasting: 5
Jan 20 22:55:51.473: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 20 22:55:51.473: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6397 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 22:55:51.474: INFO: >>> kubeConfig: /root/.kube/config
I0120 22:55:51.593538       9 log.go:172] (0xc001720e70) (0xc001fcf040) Create stream
I0120 22:55:51.593705       9 log.go:172] (0xc001720e70) (0xc001fcf040) Stream added, broadcasting: 1
I0120 22:55:51.601088       9 log.go:172] (0xc001720e70) Reply frame received for 1
I0120 22:55:51.601126       9 log.go:172] (0xc001720e70) (0xc000e76e60) Create stream
I0120 22:55:51.601134       9 log.go:172] (0xc001720e70) (0xc000e76e60) Stream added, broadcasting: 3
I0120 22:55:51.603574       9 log.go:172] (0xc001720e70) Reply frame received for 3
I0120 22:55:51.603669       9 log.go:172] (0xc001720e70) (0xc00046e500) Create stream
I0120 22:55:51.603708       9 log.go:172] (0xc001720e70) (0xc00046e500) Stream added, broadcasting: 5
I0120 22:55:51.607177       9 log.go:172] (0xc001720e70) Reply frame received for 5
I0120 22:55:51.677322       9 log.go:172] (0xc001720e70) Data frame received for 3
I0120 22:55:51.677455       9 log.go:172] (0xc000e76e60) (3) Data frame handling
I0120 22:55:51.677513       9 log.go:172] (0xc000e76e60) (3) Data frame sent
I0120 22:55:51.765411       9 log.go:172] (0xc001720e70) Data frame received for 1
I0120 22:55:51.765554       9 log.go:172] (0xc001720e70) (0xc000e76e60) Stream removed, broadcasting: 3
I0120 22:55:51.765784       9 log.go:172] (0xc001fcf040) (1) Data frame handling
I0120 22:55:51.765869       9 log.go:172] (0xc001fcf040) (1) Data frame sent
I0120 22:55:51.765959       9 log.go:172] (0xc001720e70) (0xc00046e500) Stream removed, broadcasting: 5
I0120 22:55:51.766027       9 log.go:172] (0xc001720e70) (0xc001fcf040) Stream removed, broadcasting: 1
I0120 22:55:51.766068       9 log.go:172] (0xc001720e70) Go away received
I0120 22:55:51.766614       9 log.go:172] (0xc001720e70) (0xc001fcf040) Stream removed, broadcasting: 1
I0120 22:55:51.766700       9 log.go:172] (0xc001720e70) (0xc000e76e60) Stream removed, broadcasting: 3
I0120 22:55:51.766723       9 log.go:172] (0xc001720e70) (0xc00046e500) Stream removed, broadcasting: 5
Jan 20 22:55:51.766: INFO: Exec stderr: ""
Jan 20 22:55:51.767: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6397 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 22:55:51.767: INFO: >>> kubeConfig: /root/.kube/config
I0120 22:55:51.816541       9 log.go:172] (0xc0045b0a50) (0xc000e77680) Create stream
I0120 22:55:51.816780       9 log.go:172] (0xc0045b0a50) (0xc000e77680) Stream added, broadcasting: 1
I0120 22:55:51.827806       9 log.go:172] (0xc0045b0a50) Reply frame received for 1
I0120 22:55:51.827870       9 log.go:172] (0xc0045b0a50) (0xc001fcf180) Create stream
I0120 22:55:51.827886       9 log.go:172] (0xc0045b0a50) (0xc001fcf180) Stream added, broadcasting: 3
I0120 22:55:51.829447       9 log.go:172] (0xc0045b0a50) Reply frame received for 3
I0120 22:55:51.829507       9 log.go:172] (0xc0045b0a50) (0xc001a3cdc0) Create stream
I0120 22:55:51.829527       9 log.go:172] (0xc0045b0a50) (0xc001a3cdc0) Stream added, broadcasting: 5
I0120 22:55:51.831258       9 log.go:172] (0xc0045b0a50) Reply frame received for 5
I0120 22:55:51.906690       9 log.go:172] (0xc0045b0a50) Data frame received for 3
I0120 22:55:51.906787       9 log.go:172] (0xc001fcf180) (3) Data frame handling
I0120 22:55:51.906825       9 log.go:172] (0xc001fcf180) (3) Data frame sent
I0120 22:55:51.971458       9 log.go:172] (0xc0045b0a50) Data frame received for 1
I0120 22:55:51.971666       9 log.go:172] (0xc000e77680) (1) Data frame handling
I0120 22:55:51.971697       9 log.go:172] (0xc000e77680) (1) Data frame sent
I0120 22:55:51.972194       9 log.go:172] (0xc0045b0a50) (0xc000e77680) Stream removed, broadcasting: 1
I0120 22:55:51.972505       9 log.go:172] (0xc0045b0a50) (0xc001fcf180) Stream removed, broadcasting: 3
I0120 22:55:51.972632       9 log.go:172] (0xc0045b0a50) (0xc001a3cdc0) Stream removed, broadcasting: 5
I0120 22:55:51.972684       9 log.go:172] (0xc0045b0a50) (0xc000e77680) Stream removed, broadcasting: 1
I0120 22:55:51.972710       9 log.go:172] (0xc0045b0a50) (0xc001fcf180) Stream removed, broadcasting: 3
I0120 22:55:51.972732       9 log.go:172] (0xc0045b0a50) (0xc001a3cdc0) Stream removed, broadcasting: 5
I0120 22:55:51.972811       9 log.go:172] (0xc0045b0a50) Go away received
Jan 20 22:55:51.972: INFO: Exec stderr: ""
Jan 20 22:55:51.972: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6397 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 22:55:51.972: INFO: >>> kubeConfig: /root/.kube/config
I0120 22:55:52.007264       9 log.go:172] (0xc0017213f0) (0xc001fcf360) Create stream
I0120 22:55:52.007396       9 log.go:172] (0xc0017213f0) (0xc001fcf360) Stream added, broadcasting: 1
I0120 22:55:52.017030       9 log.go:172] (0xc0017213f0) Reply frame received for 1
I0120 22:55:52.017088       9 log.go:172] (0xc0017213f0) (0xc000e777c0) Create stream
I0120 22:55:52.017105       9 log.go:172] (0xc0017213f0) (0xc000e777c0) Stream added, broadcasting: 3
I0120 22:55:52.018364       9 log.go:172] (0xc0017213f0) Reply frame received for 3
I0120 22:55:52.018390       9 log.go:172] (0xc0017213f0) (0xc00046e6e0) Create stream
I0120 22:55:52.018406       9 log.go:172] (0xc0017213f0) (0xc00046e6e0) Stream added, broadcasting: 5
I0120 22:55:52.019906       9 log.go:172] (0xc0017213f0) Reply frame received for 5
I0120 22:55:52.092355       9 log.go:172] (0xc0017213f0) Data frame received for 3
I0120 22:55:52.092789       9 log.go:172] (0xc000e777c0) (3) Data frame handling
I0120 22:55:52.092862       9 log.go:172] (0xc000e777c0) (3) Data frame sent
I0120 22:55:52.173294       9 log.go:172] (0xc0017213f0) Data frame received for 1
I0120 22:55:52.173620       9 log.go:172] (0xc001fcf360) (1) Data frame handling
I0120 22:55:52.173757       9 log.go:172] (0xc001fcf360) (1) Data frame sent
I0120 22:55:52.173819       9 log.go:172] (0xc0017213f0) (0xc001fcf360) Stream removed, broadcasting: 1
I0120 22:55:52.174279       9 log.go:172] (0xc0017213f0) (0xc000e777c0) Stream removed, broadcasting: 3
I0120 22:55:52.174634       9 log.go:172] (0xc0017213f0) (0xc00046e6e0) Stream removed, broadcasting: 5
I0120 22:55:52.174685       9 log.go:172] (0xc0017213f0) Go away received
I0120 22:55:52.174762       9 log.go:172] (0xc0017213f0) (0xc001fcf360) Stream removed, broadcasting: 1
I0120 22:55:52.174785       9 log.go:172] (0xc0017213f0) (0xc000e777c0) Stream removed, broadcasting: 3
I0120 22:55:52.174798       9 log.go:172] (0xc0017213f0) (0xc00046e6e0) Stream removed, broadcasting: 5
Jan 20 22:55:52.174: INFO: Exec stderr: ""
Jan 20 22:55:52.174: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6397 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 22:55:52.175: INFO: >>> kubeConfig: /root/.kube/config
I0120 22:55:52.269389       9 log.go:172] (0xc001721970) (0xc001fcf680) Create stream
I0120 22:55:52.270016       9 log.go:172] (0xc001721970) (0xc001fcf680) Stream added, broadcasting: 1
I0120 22:55:52.273928       9 log.go:172] (0xc001721970) Reply frame received for 1
I0120 22:55:52.274053       9 log.go:172] (0xc001721970) (0xc000e77cc0) Create stream
I0120 22:55:52.274218       9 log.go:172] (0xc001721970) (0xc000e77cc0) Stream added, broadcasting: 3
I0120 22:55:52.278960       9 log.go:172] (0xc001721970) Reply frame received for 3
I0120 22:55:52.279092       9 log.go:172] (0xc001721970) (0xc001fcf860) Create stream
I0120 22:55:52.279103       9 log.go:172] (0xc001721970) (0xc001fcf860) Stream added, broadcasting: 5
I0120 22:55:52.280911       9 log.go:172] (0xc001721970) Reply frame received for 5
I0120 22:55:52.400172       9 log.go:172] (0xc001721970) Data frame received for 3
I0120 22:55:52.400299       9 log.go:172] (0xc000e77cc0) (3) Data frame handling
I0120 22:55:52.400328       9 log.go:172] (0xc000e77cc0) (3) Data frame sent
I0120 22:55:52.485296       9 log.go:172] (0xc001721970) Data frame received for 1
I0120 22:55:52.485593       9 log.go:172] (0xc001721970) (0xc000e77cc0) Stream removed, broadcasting: 3
I0120 22:55:52.485703       9 log.go:172] (0xc001fcf680) (1) Data frame handling
I0120 22:55:52.485756       9 log.go:172] (0xc001fcf680) (1) Data frame sent
I0120 22:55:52.486079       9 log.go:172] (0xc001721970) (0xc001fcf860) Stream removed, broadcasting: 5
I0120 22:55:52.486226       9 log.go:172] (0xc001721970) (0xc001fcf680) Stream removed, broadcasting: 1
I0120 22:55:52.486910       9 log.go:172] (0xc001721970) Go away received
I0120 22:55:52.487939       9 log.go:172] (0xc001721970) (0xc001fcf680) Stream removed, broadcasting: 1
I0120 22:55:52.488039       9 log.go:172] (0xc001721970) (0xc000e77cc0) Stream removed, broadcasting: 3
I0120 22:55:52.488071       9 log.go:172] (0xc001721970) (0xc001fcf860) Stream removed, broadcasting: 5
Jan 20 22:55:52.488: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:55:52.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6397" for this suite.

• [SLOW TEST:22.636 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4476,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:55:52.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 20 22:55:52.685: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8b5de69-76ec-482a-b54e-827dc292387a" in namespace "downward-api-5225" to be "success or failure"
Jan 20 22:55:52.691: INFO: Pod "downwardapi-volume-c8b5de69-76ec-482a-b54e-827dc292387a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.973504ms
Jan 20 22:55:54.703: INFO: Pod "downwardapi-volume-c8b5de69-76ec-482a-b54e-827dc292387a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017572318s
Jan 20 22:55:57.230: INFO: Pod "downwardapi-volume-c8b5de69-76ec-482a-b54e-827dc292387a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.544930771s
Jan 20 22:55:59.241: INFO: Pod "downwardapi-volume-c8b5de69-76ec-482a-b54e-827dc292387a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.555117688s
Jan 20 22:56:01.248: INFO: Pod "downwardapi-volume-c8b5de69-76ec-482a-b54e-827dc292387a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.562123429s
STEP: Saw pod success
Jan 20 22:56:01.248: INFO: Pod "downwardapi-volume-c8b5de69-76ec-482a-b54e-827dc292387a" satisfied condition "success or failure"
Jan 20 22:56:01.252: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod downwardapi-volume-c8b5de69-76ec-482a-b54e-827dc292387a container client-container: 
STEP: delete the pod
Jan 20 22:56:01.796: INFO: Waiting for pod downwardapi-volume-c8b5de69-76ec-482a-b54e-827dc292387a to disappear
Jan 20 22:56:02.057: INFO: Pod downwardapi-volume-c8b5de69-76ec-482a-b54e-827dc292387a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:56:02.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5225" for this suite.

• [SLOW TEST:9.634 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4477,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:56:02.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jan 20 22:56:02.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4740'
Jan 20 22:56:05.079: INFO: stderr: ""
Jan 20 22:56:05.079: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 22:56:05.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4740'
Jan 20 22:56:05.354: INFO: stderr: ""
Jan 20 22:56:05.354: INFO: stdout: "update-demo-nautilus-k9klg update-demo-nautilus-sdm29 "
Jan 20 22:56:05.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k9klg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4740'
Jan 20 22:56:05.493: INFO: stderr: ""
Jan 20 22:56:05.493: INFO: stdout: ""
Jan 20 22:56:05.493: INFO: update-demo-nautilus-k9klg is created but not running
Jan 20 22:56:10.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4740'
Jan 20 22:56:11.039: INFO: stderr: ""
Jan 20 22:56:11.039: INFO: stdout: "update-demo-nautilus-k9klg update-demo-nautilus-sdm29 "
Jan 20 22:56:11.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k9klg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4740'
Jan 20 22:56:11.623: INFO: stderr: ""
Jan 20 22:56:11.624: INFO: stdout: ""
Jan 20 22:56:11.624: INFO: update-demo-nautilus-k9klg is created but not running
Jan 20 22:56:16.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4740'
Jan 20 22:56:16.795: INFO: stderr: ""
Jan 20 22:56:16.796: INFO: stdout: "update-demo-nautilus-k9klg update-demo-nautilus-sdm29 "
Jan 20 22:56:16.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k9klg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4740'
Jan 20 22:56:16.971: INFO: stderr: ""
Jan 20 22:56:16.972: INFO: stdout: "true"
Jan 20 22:56:16.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k9klg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4740'
Jan 20 22:56:17.098: INFO: stderr: ""
Jan 20 22:56:17.098: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 22:56:17.098: INFO: validating pod update-demo-nautilus-k9klg
Jan 20 22:56:17.103: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 22:56:17.103: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 20 22:56:17.104: INFO: update-demo-nautilus-k9klg is verified up and running
Jan 20 22:56:17.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sdm29 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4740'
Jan 20 22:56:17.225: INFO: stderr: ""
Jan 20 22:56:17.225: INFO: stdout: "true"
Jan 20 22:56:17.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sdm29 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4740'
Jan 20 22:56:17.308: INFO: stderr: ""
Jan 20 22:56:17.308: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 22:56:17.308: INFO: validating pod update-demo-nautilus-sdm29
Jan 20 22:56:17.316: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 22:56:17.316: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 20 22:56:17.316: INFO: update-demo-nautilus-sdm29 is verified up and running
STEP: using delete to clean up resources
Jan 20 22:56:17.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4740'
Jan 20 22:56:17.464: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 22:56:17.464: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 20 22:56:17.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4740'
Jan 20 22:56:17.566: INFO: stderr: "No resources found in kubectl-4740 namespace.\n"
Jan 20 22:56:17.566: INFO: stdout: ""
Jan 20 22:56:17.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4740 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 20 22:56:17.683: INFO: stderr: ""
Jan 20 22:56:17.683: INFO: stdout: "update-demo-nautilus-k9klg\nupdate-demo-nautilus-sdm29\n"
Jan 20 22:56:18.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4740'
Jan 20 22:56:18.417: INFO: stderr: "No resources found in kubectl-4740 namespace.\n"
Jan 20 22:56:18.417: INFO: stdout: ""
Jan 20 22:56:18.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4740 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 20 22:56:18.603: INFO: stderr: ""
Jan 20 22:56:18.603: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:56:18.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4740" for this suite.

• [SLOW TEST:16.462 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":274,"skipped":4485,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:56:18.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 20 22:56:19.610: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:56:22.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3370" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":275,"skipped":4490,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:56:22.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 20 22:56:22.126: INFO: >>> kubeConfig: /root/.kube/config
Jan 20 22:56:26.430: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:56:40.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4998" for this suite.

• [SLOW TEST:18.319 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":276,"skipped":4502,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:56:40.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:57:30.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6045" for this suite.

• [SLOW TEST:50.566 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4513,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 20 22:57:30.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 20 22:57:31.084: INFO: Waiting up to 5m0s for pod "pod-d763b59e-d9ad-46fd-80ba-4848cde97bbf" in namespace "emptydir-9928" to be "success or failure"
Jan 20 22:57:31.089: INFO: Pod "pod-d763b59e-d9ad-46fd-80ba-4848cde97bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.701997ms
Jan 20 22:57:33.096: INFO: Pod "pod-d763b59e-d9ad-46fd-80ba-4848cde97bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01179851s
Jan 20 22:57:35.104: INFO: Pod "pod-d763b59e-d9ad-46fd-80ba-4848cde97bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019753888s
Jan 20 22:57:37.111: INFO: Pod "pod-d763b59e-d9ad-46fd-80ba-4848cde97bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026275916s
Jan 20 22:57:39.119: INFO: Pod "pod-d763b59e-d9ad-46fd-80ba-4848cde97bbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034803503s
STEP: Saw pod success
Jan 20 22:57:39.119: INFO: Pod "pod-d763b59e-d9ad-46fd-80ba-4848cde97bbf" satisfied condition "success or failure"
Jan 20 22:57:39.124: INFO: Trying to get logs from node jerma-node pod pod-d763b59e-d9ad-46fd-80ba-4848cde97bbf container test-container: 
STEP: delete the pod
Jan 20 22:57:39.171: INFO: Waiting for pod pod-d763b59e-d9ad-46fd-80ba-4848cde97bbf to disappear
Jan 20 22:57:39.187: INFO: Pod pod-d763b59e-d9ad-46fd-80ba-4848cde97bbf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 20 22:57:39.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9928" for this suite.

• [SLOW TEST:8.307 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4518,"failed":0}
SSSSSSSSSSSSSSSSSS
Jan 20 22:57:39.247: INFO: Running AfterSuite actions on all nodes
Jan 20 22:57:39.248: INFO: Running AfterSuite actions on node 1
Jan 20 22:57:39.248: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4536,"failed":0}

Ran 278 of 4814 Specs in 6505.558 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4536 Skipped
PASS