I0225 23:39:12.451280 9 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0225 23:39:12.452832 9 e2e.go:109] Starting e2e run "0ce5b58f-0b63-4464-ac09-7fcd812e15c5" on Ginkgo node 1
{"msg":"Test Suite starting","total":280,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1582673950 - Will randomize all specs
Will run 280 of 4845 specs

Feb 25 23:39:12.553: INFO: >>> kubeConfig: /root/.kube/config
Feb 25 23:39:12.564: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 25 23:39:12.605: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 25 23:39:12.663: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 25 23:39:12.663: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 25 23:39:12.663: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 25 23:39:12.677: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 25 23:39:12.677: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 25 23:39:12.677: INFO: e2e test version: v1.18.0-alpha.2.152+426b3538900329
Feb 25 23:39:12.679: INFO: kube-apiserver version: v1.17.0
Feb 25 23:39:12.679: INFO: >>> kubeConfig: /root/.kube/config
Feb 25 23:39:12.687: INFO: Cluster IP family: ipv4
SSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 25 23:39:12.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Feb 25 23:39:12.775: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 25 23:39:21.373: INFO: Successfully updated pod "pod-update-activedeadlineseconds-47e1f3ff-b5eb-4b7b-919a-3bc44a30f317"
Feb 25 23:39:21.373: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-47e1f3ff-b5eb-4b7b-919a-3bc44a30f317" in namespace "pods-6028" to be "terminated due to deadline exceeded"
Feb 25 23:39:21.418: INFO: Pod "pod-update-activedeadlineseconds-47e1f3ff-b5eb-4b7b-919a-3bc44a30f317": Phase="Running", Reason="", readiness=true. Elapsed: 44.658266ms
Feb 25 23:39:23.425: INFO: Pod "pod-update-activedeadlineseconds-47e1f3ff-b5eb-4b7b-919a-3bc44a30f317": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.051712483s
Feb 25 23:39:23.425: INFO: Pod "pod-update-activedeadlineseconds-47e1f3ff-b5eb-4b7b-919a-3bc44a30f317" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 25 23:39:23.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6028" for this suite.
• [SLOW TEST:10.756 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":280,"completed":1,"skipped":3,"failed":0}
SSSSSSSSSSSSS
------------------------------
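The spec above exercises spec.activeDeadlineSeconds: the framework creates a long-running pod, then shortens the deadline so the kubelet fails the pod with reason DeadlineExceeded, which is exactly the Phase="Failed" transition logged. A minimal sketch of that update (hedged: assumes client-go v0.18+ context-aware signatures; the namespace and pod name are placeholders, not the generated names above):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods := cs.CoreV1().Pods("default")                               // placeholder namespace
	pod, err := pods.Get(context.TODO(), "my-pod", metav1.GetOptions{}) // placeholder pod name
	if err != nil {
		panic(err)
	}

	// Tighten the deadline: seconds measured from pod start time. Once it
	// is exceeded, the kubelet kills the pod and sets Phase=Failed with
	// Reason=DeadlineExceeded, as seen in the log above.
	deadline := int64(5)
	pod.Spec.ActiveDeadlineSeconds = &deadline
	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("updated; pod will fail once the deadline passes")
}
```

Note that on an existing pod, validation only permits activeDeadlineSeconds to be set or decreased; it can never be increased or removed.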
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 25 23:39:23.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-4baacb6b-839e-46b1-bf89-6b6e9ec00dd2
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 25 23:39:23.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5042" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":280,"completed":2,"skipped":16,"failed":0}
SSSSSSSS
------------------------------
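This Secrets spec is a pure API-validation check: a Secret whose data map contains the empty string as a key must be rejected at create time, so no pod is ever scheduled. A sketch of the negative case (same client-go assumptions and placeholder names as above):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"}, // placeholder name
		Data: map[string][]byte{
			"": []byte("value-1"), // invalid: data keys must be non-empty
		},
	}
	_, err = cs.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{})
	fmt.Println(err) // expect a field validation error, not a created object
}
```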
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 25 23:39:23.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 25 23:39:23.742: INFO: Waiting up to 5m0s for pod "pod-9234e404-c4d7-4ac3-8ad2-2bc6892f266f" in namespace "emptydir-8459" to be "success or failure"
Feb 25 23:39:23.754: INFO: Pod "pod-9234e404-c4d7-4ac3-8ad2-2bc6892f266f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.806069ms
Feb 25 23:39:25.760: INFO: Pod "pod-9234e404-c4d7-4ac3-8ad2-2bc6892f266f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017741333s
Feb 25 23:39:27.766: INFO: Pod "pod-9234e404-c4d7-4ac3-8ad2-2bc6892f266f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023567889s
Feb 25 23:39:29.773: INFO: Pod "pod-9234e404-c4d7-4ac3-8ad2-2bc6892f266f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030803567s
Feb 25 23:39:31.782: INFO: Pod "pod-9234e404-c4d7-4ac3-8ad2-2bc6892f266f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039192575s
Feb 25 23:39:33.792: INFO: Pod "pod-9234e404-c4d7-4ac3-8ad2-2bc6892f266f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.049460771s
Feb 25 23:39:35.805: INFO: Pod "pod-9234e404-c4d7-4ac3-8ad2-2bc6892f266f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.062271989s
STEP: Saw pod success
Feb 25 23:39:35.805: INFO: Pod "pod-9234e404-c4d7-4ac3-8ad2-2bc6892f266f" satisfied condition "success or failure"
Feb 25 23:39:35.811: INFO: Trying to get logs from node jerma-node pod pod-9234e404-c4d7-4ac3-8ad2-2bc6892f266f container test-container:
STEP: delete the pod
Feb 25 23:39:36.259: INFO: Waiting for pod pod-9234e404-c4d7-4ac3-8ad2-2bc6892f266f to disappear
Feb 25 23:39:36.281: INFO: Pod pod-9234e404-c4d7-4ac3-8ad2-2bc6892f266f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 25 23:39:36.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8459" for this suite.
• [SLOW TEST:12.698 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":3,"skipped":24,"failed":0}
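The emptyDir spec runs a pod whose volume uses medium "Memory" (tmpfs) and waits for it to reach Succeeded, which is why the log polls the phase until the "success or failure" condition holds. Roughly the pod the framework builds, with busybox standing in for the e2e test image and all names placeholders:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // placeholder
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // lets the pod reach Succeeded
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // stand-in for the e2e test image
				Command: []string{"sh", "-c", "stat -c %a /test-volume && touch /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```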
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 25 23:39:36.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-7ltcp in namespace proxy-2536
I0225 23:39:36.477013 9 runners.go:189] Created replication controller with name: proxy-service-7ltcp, namespace: proxy-2536, replica count: 1
I0225 23:39:37.529082 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0225 23:39:38.530357 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0225 23:39:39.530990 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0225 23:39:40.532152 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0225 23:39:41.532864 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0225 23:39:42.533964 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0225 23:39:43.534771 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0225 23:39:44.535416 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0225 23:39:45.536523 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0225 23:39:46.537552 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0225 23:39:47.538201 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0225 23:39:48.539084 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0225 23:39:49.539737 9 runners.go:189] proxy-service-7ltcp Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 25 23:39:49.545: INFO: setup took 13.123884447s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 25 23:39:49.617: INFO: (0) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 70.861745ms) Feb 25 23:39:49.617: INFO: (0) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 71.330068ms) Feb 25 23:39:49.618: INFO: (0) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 71.804973ms) Feb 25 23:39:49.618: INFO: (0) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 71.183946ms) Feb 25 23:39:49.618: INFO: (0) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 71.857077ms) Feb 25 23:39:49.621: INFO: (0) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 74.436277ms) Feb 25 23:39:49.621: INFO: (0) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 75.070986ms) Feb 25 23:39:49.621: INFO: (0) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 74.527858ms) Feb 25 23:39:49.621: INFO: (0) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 75.132741ms) Feb 25 23:39:49.621: INFO: (0) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ...
(200; 73.189146ms) Feb 25 23:39:49.621: INFO: (0) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx/proxy/: test (200; 73.995292ms) Feb 25 23:39:49.624: INFO: (0) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 78.010062ms) Feb 25 23:39:49.630: INFO: (0) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 83.713226ms) Feb 25 23:39:49.632: INFO: (0) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test<... (200; 9.225472ms) Feb 25 23:39:49.645: INFO: (1) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: ... (200; 18.254824ms) Feb 25 23:39:49.653: INFO: (1) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 18.249997ms) Feb 25 23:39:49.653: INFO: (1) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 18.38369ms) Feb 25 23:39:49.653: INFO: (1) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 18.893062ms) Feb 25 23:39:49.654: INFO: (1) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 19.502203ms) Feb 25 23:39:49.655: INFO: (1) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx/proxy/: test (200; 20.352581ms) Feb 25 23:39:49.659: INFO: (1) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 24.134284ms) Feb 25 23:39:49.659: INFO: (1) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname1/proxy/: tls baz (200; 24.330542ms) Feb 25 23:39:49.659: INFO: (1) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 24.919153ms) Feb 25 23:39:49.659: INFO: (1) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 24.794267ms) Feb 25 23:39:49.665: INFO: (2) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 5.285194ms) Feb 25 23:39:49.667: INFO: (2) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 7.234852ms) Feb 25 23:39:49.667: INFO: (2) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 7.751831ms) Feb 25 23:39:49.668: INFO: (2) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 8.807116ms) Feb 25 23:39:49.669: INFO: (2) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 8.785196ms) Feb 25 23:39:49.669: INFO: (2) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... 
(200; 9.04141ms) Feb 25 23:39:49.669: INFO: (2) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 9.236714ms) Feb 25 23:39:49.669: INFO: (2) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname1/proxy/: tls baz (200; 9.519618ms) Feb 25 23:39:49.669: INFO: (2) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test (200; 9.785976ms) Feb 25 23:39:49.669: INFO: (2) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 9.755586ms) Feb 25 23:39:49.672: INFO: (2) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 12.11107ms) Feb 25 23:39:49.672: INFO: (2) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 12.139811ms) Feb 25 23:39:49.672: INFO: (2) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 12.314166ms) Feb 25 23:39:49.672: INFO: (2) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 12.712639ms) Feb 25 23:39:49.682: INFO: (3) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 9.580629ms) Feb 25 23:39:49.682: INFO: (3) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 10.22697ms) Feb 25 23:39:49.682: INFO: (3) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 10.224199ms) Feb 25 23:39:49.683: INFO: (3) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 10.15701ms) Feb 25 23:39:49.683: INFO: (3) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: ... (200; 11.003696ms) Feb 25 23:39:49.684: INFO: (3) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 11.066208ms) Feb 25 23:39:49.684: INFO: (3) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 11.089954ms) Feb 25 23:39:49.684: INFO: (3) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 11.39203ms) Feb 25 23:39:49.684: INFO: (3) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx/proxy/: test (200; 12.076655ms) Feb 25 23:39:49.685: INFO: (3) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 12.375866ms) Feb 25 23:39:49.685: INFO: (3) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname1/proxy/: tls baz (200; 12.449414ms) Feb 25 23:39:49.685: INFO: (3) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 12.251596ms) Feb 25 23:39:49.685: INFO: (3) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 12.562699ms) Feb 25 23:39:49.685: INFO: (3) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 12.649375ms) Feb 25 23:39:49.697: INFO: (4) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 12.056473ms) Feb 25 23:39:49.697: INFO: (4) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 12.211296ms) Feb 25 23:39:49.698: INFO: (4) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 12.107568ms) Feb 25 23:39:49.698: INFO: (4) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... 
(200; 12.324086ms) Feb 25 23:39:49.698: INFO: (4) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test (200; 13.372914ms) Feb 25 23:39:49.699: INFO: (4) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 13.578761ms) Feb 25 23:39:49.699: INFO: (4) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 13.336855ms) Feb 25 23:39:49.699: INFO: (4) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 13.478536ms) Feb 25 23:39:49.699: INFO: (4) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 13.429284ms) Feb 25 23:39:49.699: INFO: (4) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 14.023049ms) Feb 25 23:39:49.700: INFO: (4) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 14.50976ms) Feb 25 23:39:49.704: INFO: (5) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 4.140738ms) Feb 25 23:39:49.706: INFO: (5) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 5.910465ms) Feb 25 23:39:49.706: INFO: (5) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 5.615504ms) Feb 25 23:39:49.709: INFO: (5) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... (200; 9.128534ms) Feb 25 23:39:49.709: INFO: (5) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test (200; 9.799587ms) Feb 25 23:39:49.710: INFO: (5) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 9.958239ms) Feb 25 23:39:49.710: INFO: (5) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 9.72222ms) Feb 25 23:39:49.711: INFO: (5) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 10.984312ms) Feb 25 23:39:49.711: INFO: (5) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... 
(200; 10.692702ms) Feb 25 23:39:49.711: INFO: (5) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 11.040345ms) Feb 25 23:39:49.711: INFO: (5) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 11.235742ms) Feb 25 23:39:49.711: INFO: (5) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 10.913888ms) Feb 25 23:39:49.711: INFO: (5) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 11.410574ms) Feb 25 23:39:49.720: INFO: (6) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 8.562571ms) Feb 25 23:39:49.724: INFO: (6) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 13.010223ms) Feb 25 23:39:49.725: INFO: (6) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 12.981556ms) Feb 25 23:39:49.725: INFO: (6) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 12.92229ms) Feb 25 23:39:49.725: INFO: (6) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 13.729522ms) Feb 25 23:39:49.725: INFO: (6) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname1/proxy/: tls baz (200; 13.390909ms) Feb 25 23:39:49.726: INFO: (6) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 14.044136ms) Feb 25 23:39:49.730: INFO: (6) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 18.556192ms) Feb 25 23:39:49.730: INFO: (6) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 18.58877ms) Feb 25 23:39:49.730: INFO: (6) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 18.629891ms) Feb 25 23:39:49.731: INFO: (6) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 19.639308ms) Feb 25 23:39:49.731: INFO: (6) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 19.387227ms) Feb 25 23:39:49.731: INFO: (6) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test (200; 20.137586ms) Feb 25 23:39:49.732: INFO: (6) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... (200; 20.313316ms) Feb 25 23:39:49.743: INFO: (7) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 10.513345ms) Feb 25 23:39:49.743: INFO: (7) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx/proxy/: test (200; 9.617878ms) Feb 25 23:39:49.743: INFO: (7) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 10.409016ms) Feb 25 23:39:49.743: INFO: (7) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 10.374922ms) Feb 25 23:39:49.744: INFO: (7) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: ... 
(200; 19.695897ms) Feb 25 23:39:49.752: INFO: (7) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 18.700044ms) Feb 25 23:39:49.753: INFO: (7) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 19.264579ms) Feb 25 23:39:49.753: INFO: (7) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 19.591703ms) Feb 25 23:39:49.753: INFO: (7) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 18.65726ms) Feb 25 23:39:49.753: INFO: (7) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 18.787354ms) Feb 25 23:39:49.764: INFO: (8) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 10.676356ms) Feb 25 23:39:49.764: INFO: (8) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 11.111752ms) Feb 25 23:39:49.764: INFO: (8) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx/proxy/: test (200; 10.168089ms) Feb 25 23:39:49.765: INFO: (8) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 10.086836ms) Feb 25 23:39:49.770: INFO: (8) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 15.230302ms) Feb 25 23:39:49.770: INFO: (8) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 15.539916ms) Feb 25 23:39:49.770: INFO: (8) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: ... (200; 19.380938ms) Feb 25 23:39:49.774: INFO: (8) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 19.61805ms) Feb 25 23:39:49.774: INFO: (8) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 20.275468ms) Feb 25 23:39:49.775: INFO: (8) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 19.797067ms) Feb 25 23:39:49.783: INFO: (9) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 8.601598ms) Feb 25 23:39:49.784: INFO: (9) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 9.445759ms) Feb 25 23:39:49.784: INFO: (9) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 9.720936ms) Feb 25 23:39:49.785: INFO: (9) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... 
(200; 10.110942ms) Feb 25 23:39:49.785: INFO: (9) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 9.784117ms) Feb 25 23:39:49.785: INFO: (9) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 10.093677ms) Feb 25 23:39:49.786: INFO: (9) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test (200; 25.568896ms) Feb 25 23:39:49.802: INFO: (9) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 27.431715ms) Feb 25 23:39:49.803: INFO: (9) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 27.538328ms) Feb 25 23:39:49.803: INFO: (9) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 27.546588ms) Feb 25 23:39:49.804: INFO: (9) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname1/proxy/: tls baz (200; 28.9484ms) Feb 25 23:39:49.806: INFO: (9) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 30.936266ms) Feb 25 23:39:49.808: INFO: (9) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 33.408827ms) Feb 25 23:39:49.816: INFO: (10) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 7.001657ms) Feb 25 23:39:49.816: INFO: (10) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 6.737704ms) Feb 25 23:39:49.819: INFO: (10) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx/proxy/: test (200; 9.824166ms) Feb 25 23:39:49.822: INFO: (10) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 12.44877ms) Feb 25 23:39:49.826: INFO: (10) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 16.498839ms) Feb 25 23:39:49.827: INFO: (10) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname1/proxy/: tls baz (200; 17.933938ms) Feb 25 23:39:49.827: INFO: (10) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 18.216557ms) Feb 25 23:39:49.828: INFO: (10) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 18.766964ms) Feb 25 23:39:49.828: INFO: (10) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 19.015561ms) Feb 25 23:39:49.828: INFO: (10) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 19.659562ms) Feb 25 23:39:49.828: INFO: (10) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 18.945301ms) Feb 25 23:39:49.828: INFO: (10) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... (200; 19.176701ms) Feb 25 23:39:49.828: INFO: (10) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test<... 
(200; 19.491916ms) Feb 25 23:39:49.829: INFO: (10) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 20.087381ms) Feb 25 23:39:49.829: INFO: (10) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 20.83935ms) Feb 25 23:39:49.839: INFO: (11) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 9.397817ms) Feb 25 23:39:49.839: INFO: (11) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 9.874188ms) Feb 25 23:39:49.840: INFO: (11) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 10.430917ms) Feb 25 23:39:49.840: INFO: (11) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 10.598046ms) Feb 25 23:39:49.841: INFO: (11) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 11.12729ms) Feb 25 23:39:49.841: INFO: (11) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 11.059543ms) Feb 25 23:39:49.841: INFO: (11) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx/proxy/: test (200; 11.232632ms) Feb 25 23:39:49.842: INFO: (11) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 12.618106ms) Feb 25 23:39:49.843: INFO: (11) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 13.94248ms) Feb 25 23:39:49.844: INFO: (11) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... (200; 14.723742ms) Feb 25 23:39:49.844: INFO: (11) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 14.834791ms) Feb 25 23:39:49.845: INFO: (11) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test<... (200; 14.699554ms) Feb 25 23:39:49.845: INFO: (11) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 15.083626ms) Feb 25 23:39:49.845: INFO: (11) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname1/proxy/: tls baz (200; 14.85824ms) Feb 25 23:39:49.850: INFO: (12) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... (200; 5.173993ms) Feb 25 23:39:49.854: INFO: (12) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 8.827504ms) Feb 25 23:39:49.856: INFO: (12) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 10.782905ms) Feb 25 23:39:49.856: INFO: (12) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx/proxy/: test (200; 11.019057ms) Feb 25 23:39:49.856: INFO: (12) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 11.183866ms) Feb 25 23:39:49.856: INFO: (12) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 11.626442ms) Feb 25 23:39:49.856: INFO: (12) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... 
(200; 11.705117ms) Feb 25 23:39:49.856: INFO: (12) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 11.347583ms) Feb 25 23:39:49.857: INFO: (12) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 11.380117ms) Feb 25 23:39:49.857: INFO: (12) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 11.842673ms) Feb 25 23:39:49.857: INFO: (12) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 12.290938ms) Feb 25 23:39:49.857: INFO: (12) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test (200; 7.974454ms) Feb 25 23:39:49.866: INFO: (13) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 8.736161ms) Feb 25 23:39:49.866: INFO: (13) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... (200; 8.736047ms) Feb 25 23:39:49.867: INFO: (13) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 9.43974ms) Feb 25 23:39:49.867: INFO: (13) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 9.125264ms) Feb 25 23:39:49.867: INFO: (13) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 9.542713ms) Feb 25 23:39:49.867: INFO: (13) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 9.4443ms) Feb 25 23:39:49.871: INFO: (13) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 12.972612ms) Feb 25 23:39:49.871: INFO: (13) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 13.688443ms) Feb 25 23:39:49.872: INFO: (13) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 14.593695ms) Feb 25 23:39:49.872: INFO: (13) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 14.34768ms) Feb 25 23:39:49.872: INFO: (13) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 14.290374ms) Feb 25 23:39:49.873: INFO: (13) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 14.972551ms) Feb 25 23:39:49.873: INFO: (13) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 15.32323ms) Feb 25 23:39:49.874: INFO: (13) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: ... (200; 8.615142ms) Feb 25 23:39:49.883: INFO: (14) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... 
(200; 8.614727ms) Feb 25 23:39:49.883: INFO: (14) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 8.692926ms) Feb 25 23:39:49.883: INFO: (14) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test (200; 8.986777ms) Feb 25 23:39:49.885: INFO: (14) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 10.661732ms) Feb 25 23:39:49.885: INFO: (14) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 10.542472ms) Feb 25 23:39:49.885: INFO: (14) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 11.333782ms) Feb 25 23:39:49.886: INFO: (14) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname1/proxy/: tls baz (200; 11.37064ms) Feb 25 23:39:49.886: INFO: (14) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 11.050206ms) Feb 25 23:39:49.886: INFO: (14) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 11.022451ms) Feb 25 23:39:49.896: INFO: (15) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 10.370622ms) Feb 25 23:39:49.897: INFO: (15) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... (200; 10.5454ms) Feb 25 23:39:49.897: INFO: (15) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 11.124684ms) Feb 25 23:39:49.898: INFO: (15) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 12.003037ms) Feb 25 23:39:49.898: INFO: (15) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test (200; 13.341846ms) Feb 25 23:39:49.899: INFO: (15) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 13.615873ms) Feb 25 23:39:49.900: INFO: (15) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 14.207862ms) Feb 25 23:39:49.900: INFO: (15) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname1/proxy/: tls baz (200; 13.985424ms) Feb 25 23:39:49.901: INFO: (15) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 14.881288ms) Feb 25 23:39:49.901: INFO: (15) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 14.945908ms) Feb 25 23:39:49.910: INFO: (16) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 8.557418ms) Feb 25 23:39:49.910: INFO: (16) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 8.322296ms) Feb 25 23:39:49.910: INFO: (16) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 9.318784ms) Feb 25 23:39:49.910: INFO: (16) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 9.185096ms) Feb 25 23:39:49.911: INFO: (16) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 9.767012ms) Feb 25 23:39:49.911: INFO: (16) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... 
(200; 9.922321ms) Feb 25 23:39:49.911: INFO: (16) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test (200; 13.434167ms) Feb 25 23:39:49.915: INFO: (16) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname1/proxy/: tls baz (200; 13.409426ms) Feb 25 23:39:49.915: INFO: (16) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 14.12111ms) Feb 25 23:39:49.916: INFO: (16) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 14.537608ms) Feb 25 23:39:49.916: INFO: (16) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 14.556935ms) Feb 25 23:39:49.916: INFO: (16) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 14.610282ms) Feb 25 23:39:49.917: INFO: (16) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 15.967046ms) Feb 25 23:39:49.917: INFO: (16) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 16.229866ms) Feb 25 23:39:49.927: INFO: (17) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 9.83538ms) Feb 25 23:39:49.928: INFO: (17) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 10.817479ms) Feb 25 23:39:49.929: INFO: (17) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname1/proxy/: tls baz (200; 10.732827ms) Feb 25 23:39:49.929: INFO: (17) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 11.241336ms) Feb 25 23:39:49.929: INFO: (17) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 11.066844ms) Feb 25 23:39:49.930: INFO: (17) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 12.344045ms) Feb 25 23:39:49.930: INFO: (17) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 12.878205ms) Feb 25 23:39:49.930: INFO: (17) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: test (200; 12.793264ms) Feb 25 23:39:49.931: INFO: (17) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 13.055013ms) Feb 25 23:39:49.931: INFO: (17) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... 
(200; 13.562235ms) Feb 25 23:39:49.931: INFO: (17) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 13.419113ms) Feb 25 23:39:49.931: INFO: (17) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 13.215154ms) Feb 25 23:39:49.931: INFO: (17) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 13.675042ms) Feb 25 23:39:49.936: INFO: (17) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 17.998471ms) Feb 25 23:39:49.936: INFO: (17) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 18.409416ms) Feb 25 23:39:49.946: INFO: (18) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 9.196144ms) Feb 25 23:39:49.947: INFO: (18) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx/proxy/: test (200; 9.29057ms) Feb 25 23:39:49.947: INFO: (18) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:460/proxy/: tls baz (200; 9.853206ms) Feb 25 23:39:49.948: INFO: (18) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 11.265407ms) Feb 25 23:39:49.949: INFO: (18) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:1080/proxy/: ... (200; 11.138107ms) Feb 25 23:39:49.950: INFO: (18) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 12.328305ms) Feb 25 23:39:49.950: INFO: (18) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... (200; 13.950175ms) Feb 25 23:39:49.951: INFO: (18) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 13.905973ms) Feb 25 23:39:49.952: INFO: (18) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 15.202395ms) Feb 25 23:39:49.952: INFO: (18) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 15.42388ms) Feb 25 23:39:49.952: INFO: (18) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname2/proxy/: tls qux (200; 15.225699ms) Feb 25 23:39:49.954: INFO: (18) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/: ... (200; 19.774895ms) Feb 25 23:39:49.983: INFO: (19) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 19.363084ms) Feb 25 23:39:49.983: INFO: (19) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:1080/proxy/: test<... 
(200; 19.985108ms) Feb 25 23:39:49.983: INFO: (19) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 19.459421ms) Feb 25 23:39:49.985: INFO: (19) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname1/proxy/: foo (200; 22.422945ms) Feb 25 23:39:49.986: INFO: (19) /api/v1/namespaces/proxy-2536/services/proxy-service-7ltcp:portname2/proxy/: bar (200; 23.737277ms) Feb 25 23:39:49.986: INFO: (19) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx/proxy/: test (200; 22.644349ms) Feb 25 23:39:49.986: INFO: (19) /api/v1/namespaces/proxy-2536/pods/proxy-service-7ltcp-ps7lx:162/proxy/: bar (200; 23.497518ms) Feb 25 23:39:49.987: INFO: (19) /api/v1/namespaces/proxy-2536/pods/http:proxy-service-7ltcp-ps7lx:160/proxy/: foo (200; 24.570896ms) Feb 25 23:39:49.987: INFO: (19) /api/v1/namespaces/proxy-2536/services/https:proxy-service-7ltcp:tlsportname1/proxy/: tls baz (200; 24.318064ms) Feb 25 23:39:49.988: INFO: (19) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname1/proxy/: foo (200; 25.240412ms) Feb 25 23:39:49.989: INFO: (19) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:462/proxy/: tls qux (200; 26.622439ms) Feb 25 23:39:49.989: INFO: (19) /api/v1/namespaces/proxy-2536/services/http:proxy-service-7ltcp:portname2/proxy/: bar (200; 25.957133ms) Feb 25 23:39:49.989: INFO: (19) /api/v1/namespaces/proxy-2536/pods/https:proxy-service-7ltcp-ps7lx:443/proxy/:
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Feb 25 23:40:02.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 25 23:40:22.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2012" for this suite.
• [SLOW TEST:19.597 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":280,"completed":5,"skipped":26,"failed":0}
SSSSSS
------------------------------
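The OpenAPI-publishing spec turns on the served flag of a multi-version CRD: flipping one version to served: false must drop that version's definitions from the aggregated /openapi/v2 document while leaving the other version untouched. A sketch of the relevant spec shape (assuming the apiextensions.k8s.io/v1 Go types; group, kind, and names are placeholders):

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "testcrds.crd-publish-openapi-test.example.com"}, // placeholder
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test.example.com", // placeholder group
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "testcrds", Singular: "testcrd", Kind: "TestCrd", ListKind: "TestCrdList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				// Updating this entry to Served: false is the step logged as
				// "mark a version not served"; its definitions then disappear
				// from the published OpenAPI spec while v1's remain.
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	fmt.Println(crd.Name)
}
```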
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":280,"completed":6,"skipped":32,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:40:23.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 25 23:40:23.809: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:40:25.816: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:40:27.815: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:40:29.841: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false) Feb 25 23:40:31.817: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false) Feb 25 23:40:33.831: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false) Feb 25 23:40:35.816: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false) Feb 25 23:40:37.816: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false) Feb 25 23:40:39.835: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false) Feb 25 23:40:41.827: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false) Feb 25 23:40:43.818: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false) Feb 25 23:40:45.816: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false) Feb 25 23:40:47.817: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false) Feb 25 23:40:49.823: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false) Feb 25 23:40:51.818: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = true) Feb 25 23:40:51.826: INFO: Container started at 2020-02-25 23:40:29 +0000 UTC, pod became ready at 2020-02-25 23:40:50 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:40:51.826: INFO: Waiting up to 3m0s for all 
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 25 23:40:23.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 25 23:40:23.809: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Pending, waiting for it to be Running (with Ready = true)
Feb 25 23:40:25.816: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Pending, waiting for it to be Running (with Ready = true)
Feb 25 23:40:27.815: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Pending, waiting for it to be Running (with Ready = true)
Feb 25 23:40:29.841: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false)
Feb 25 23:40:31.817: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false)
Feb 25 23:40:33.831: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false)
Feb 25 23:40:35.816: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false)
Feb 25 23:40:37.816: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false)
Feb 25 23:40:39.835: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false)
Feb 25 23:40:41.827: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false)
Feb 25 23:40:43.818: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false)
Feb 25 23:40:45.816: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false)
Feb 25 23:40:47.817: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false)
Feb 25 23:40:49.823: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = false)
Feb 25 23:40:51.818: INFO: The status of Pod test-webserver-8dec9948-69a1-4528-a00c-63726d7cf92d is Running (Ready = true)
Feb 25 23:40:51.826: INFO: Container started at 2020-02-25 23:40:29 +0000 UTC, pod became ready at 2020-02-25 23:40:50 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 25 23:40:51.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6778" for this suite.
• [SLOW TEST:28.306 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":280,"completed":7,"skipped":51,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
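The readiness-probe spec's trace (Pending, then Running with Ready = false for roughly the initial delay, then Ready = true, restart count never moving) follows directly from the probe configuration: no probe fires before initialDelaySeconds. A comparable container spec, sketched with promoted fields so it is insulated from the Handler/ProbeHandler rename across API versions (image and timings are placeholders):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		// No probe runs before InitialDelaySeconds, so the pod sits at
		// Running (Ready = false) first, exactly as the log shows.
		InitialDelaySeconds: 20, // placeholder timing
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
	// Assign via the promoted field to avoid naming the embedded struct,
	// which was renamed from Handler to ProbeHandler in later API versions.
	probe.HTTPGet = &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)}

	container := corev1.Container{
		Name:           "test-webserver",
		Image:          "nginx", // stand-in for the e2e test-webserver image
		ReadinessProbe: probe,
	}
	fmt.Printf("%s ready-gated by %+v\n", container.Name, container.ReadinessProbe.HTTPGet)
}
```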
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 25 23:40:51.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-108070b0-c047-4f5d-b9b8-3c6dab4cc93d
STEP: Creating a pod to test consume configMaps
Feb 25 23:40:51.995: INFO: Waiting up to 5m0s for pod "pod-configmaps-8b3f9645-a8a1-4da8-bad0-bcc899bc77be" in namespace "configmap-39" to be "success or failure"
Feb 25 23:40:52.075: INFO: Pod "pod-configmaps-8b3f9645-a8a1-4da8-bad0-bcc899bc77be": Phase="Pending", Reason="", readiness=false. Elapsed: 79.354369ms
Feb 25 23:40:54.124: INFO: Pod "pod-configmaps-8b3f9645-a8a1-4da8-bad0-bcc899bc77be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129127439s
Feb 25 23:40:56.134: INFO: Pod "pod-configmaps-8b3f9645-a8a1-4da8-bad0-bcc899bc77be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138225003s
Feb 25 23:40:58.141: INFO: Pod "pod-configmaps-8b3f9645-a8a1-4da8-bad0-bcc899bc77be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145710486s
Feb 25 23:41:00.149: INFO: Pod "pod-configmaps-8b3f9645-a8a1-4da8-bad0-bcc899bc77be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153505737s
Feb 25 23:41:02.162: INFO: Pod "pod-configmaps-8b3f9645-a8a1-4da8-bad0-bcc899bc77be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.166370312s
STEP: Saw pod success
Feb 25 23:41:02.162: INFO: Pod "pod-configmaps-8b3f9645-a8a1-4da8-bad0-bcc899bc77be" satisfied condition "success or failure"
Feb 25 23:41:02.166: INFO: Trying to get logs from node jerma-node pod pod-configmaps-8b3f9645-a8a1-4da8-bad0-bcc899bc77be container configmap-volume-test:
STEP: delete the pod
Feb 25 23:41:02.299: INFO: Waiting for pod pod-configmaps-8b3f9645-a8a1-4da8-bad0-bcc899bc77be to disappear
Feb 25 23:41:02.323: INFO: Pod pod-configmaps-8b3f9645-a8a1-4da8-bad0-bcc899bc77be no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 25 23:41:02.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-39" for this suite.
• [SLOW TEST:10.493 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":8,"skipped":77,"failed":0}
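"With mappings" refers to the items field of a configMap volume source: selected keys are projected to chosen relative paths instead of one file per key. A sketch of the wiring (placeholder names; the log's generated suffixes omitted):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"}, // placeholder
		Data:       map[string]string{"data-1": "value-1"},
	}
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
				// Map key "data-1" to a custom relative path inside the mount,
				// so the container sees <mountPath>/path/to/data-2.
				Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
			},
		},
	}
	fmt.Println(cm.Name, "->", vol.VolumeSource.ConfigMap.Items[0].Path)
}
```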
Feb 25 23:41:36.086: INFO: File jessie_udp@dns-test-service-3.dns-6338.svc.cluster.local from pod dns-6338/dns-test-693db8d0-9324-4f8a-b43f-f1061e0bf25a contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 25 23:41:36.086: INFO: Lookups using dns-6338/dns-test-693db8d0-9324-4f8a-b43f-f1061e0bf25a failed for: [wheezy_udp@dns-test-service-3.dns-6338.svc.cluster.local jessie_udp@dns-test-service-3.dns-6338.svc.cluster.local] Feb 25 23:41:41.077: INFO: File wheezy_udp@dns-test-service-3.dns-6338.svc.cluster.local from pod dns-6338/dns-test-693db8d0-9324-4f8a-b43f-f1061e0bf25a contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 25 23:41:41.081: INFO: File jessie_udp@dns-test-service-3.dns-6338.svc.cluster.local from pod dns-6338/dns-test-693db8d0-9324-4f8a-b43f-f1061e0bf25a contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 25 23:41:41.081: INFO: Lookups using dns-6338/dns-test-693db8d0-9324-4f8a-b43f-f1061e0bf25a failed for: [wheezy_udp@dns-test-service-3.dns-6338.svc.cluster.local jessie_udp@dns-test-service-3.dns-6338.svc.cluster.local] Feb 25 23:41:46.080: INFO: DNS probes using dns-test-693db8d0-9324-4f8a-b43f-f1061e0bf25a succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6338.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6338.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6338.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6338.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 25 23:42:04.431: INFO: DNS probes using dns-test-c623a28a-b221-4e31-a023-1111103a7435 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:42:04.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6338" for this suite. 
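The sequence above exercises ExternalName DNS end to end: the service first resolves as a CNAME to foo.example.com, the probes keep reporting the stale answer for a while after the spec changes (the retries logged above) until cluster DNS catches up with bar.example.com, and a final conversion to type=ClusterIP switches the record from CNAME to A. A hedged sketch of the moving parts (service name kept from the log; everything else illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# From any pod with dig installed, the name resolves as a CNAME:
#   dig +short dns-test-service-3.<namespace>.svc.cluster.local CNAME
#   -> foo.example.com.
# Repointing the ExternalName, as the test does mid-run:
kubectl patch service dns-test-service-3 --type=merge \
  -p '{"spec":{"externalName":"bar.example.com"}}'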
• [SLOW TEST:62.302 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":280,"completed":9,"skipped":77,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:42:04.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Feb 25 23:42:04.785: INFO: PodSpec: initContainers in spec.initContainers Feb 25 23:43:05.451: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9692ee10-24b0-49b2-b91b-f80d5cfc543c", GenerateName:"", Namespace:"init-container-2719", SelfLink:"/api/v1/namespaces/init-container-2719/pods/pod-init-9692ee10-24b0-49b2-b91b-f80d5cfc543c", UID:"118b0715-1807-49d2-94ba-3fb9d0f5d568", ResourceVersion:"10754685", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718270924, loc:(*time.Location)(0x7e52ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"785102164"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5mt76", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0005a8ac0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5mt76", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5mt76", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5mt76", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c20258), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0030f00c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001c202e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001c20300)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001c20308), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001c2030c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270925, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270925, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270925, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270924, loc:(*time.Location)(0x7e52ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc002c181a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002d5a1c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002d5a230)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://296f065499bff0d3a70720d1ce9001c5d2bbfa935c5c70bac5f24e3609f1895c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c181e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c181c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc001c2038f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:43:05.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2719" for this suite. • [SLOW TEST:60.843 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":280,"completed":10,"skipped":87,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:43:05.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 25 23:43:06.610: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 25 23:43:08.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, 
loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 25 23:43:10.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 25 23:43:12.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 25 23:43:14.649: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 25 23:43:16.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, 
loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718270986, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 25 23:43:19.690: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Feb 25 23:43:19.728: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:43:19.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8026" for this suite. STEP: Destroying namespace "webhook-8026-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.510 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":280,"completed":11,"skipped":131,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:43:19.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 25 23:43:20.109: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 25 23:43:24.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9560 create -f -' Feb 25 23:43:29.320: INFO: stderr: "" Feb 25 23:43:29.320: INFO: stdout: 
"e2e-test-crd-publish-openapi-8540-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 25 23:43:29.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9560 delete e2e-test-crd-publish-openapi-8540-crds test-cr' Feb 25 23:43:30.279: INFO: stderr: "" Feb 25 23:43:30.280: INFO: stdout: "e2e-test-crd-publish-openapi-8540-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Feb 25 23:43:30.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9560 apply -f -' Feb 25 23:43:30.742: INFO: stderr: "" Feb 25 23:43:30.742: INFO: stdout: "e2e-test-crd-publish-openapi-8540-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 25 23:43:30.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9560 delete e2e-test-crd-publish-openapi-8540-crds test-cr' Feb 25 23:43:30.912: INFO: stderr: "" Feb 25 23:43:30.912: INFO: stdout: "e2e-test-crd-publish-openapi-8540-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 25 23:43:30.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8540-crds' Feb 25 23:43:31.257: INFO: stderr: "" Feb 25 23:43:31.257: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8540-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:43:34.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9560" for this suite. 
• [SLOW TEST:14.951 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":280,"completed":12,"skipped":144,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:43:34.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's command Feb 25 23:43:35.064: INFO: Waiting up to 5m0s for pod "var-expansion-88a7e1a5-9351-424e-afa1-f25fb45f6360" in namespace "var-expansion-851" to be "success or failure" Feb 25 23:43:35.075: INFO: Pod "var-expansion-88a7e1a5-9351-424e-afa1-f25fb45f6360": Phase="Pending", Reason="", readiness=false. Elapsed: 10.760025ms Feb 25 23:43:37.083: INFO: Pod "var-expansion-88a7e1a5-9351-424e-afa1-f25fb45f6360": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01839365s Feb 25 23:43:39.091: INFO: Pod "var-expansion-88a7e1a5-9351-424e-afa1-f25fb45f6360": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025967003s Feb 25 23:43:41.105: INFO: Pod "var-expansion-88a7e1a5-9351-424e-afa1-f25fb45f6360": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040722644s Feb 25 23:43:43.113: INFO: Pod "var-expansion-88a7e1a5-9351-424e-afa1-f25fb45f6360": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048821249s STEP: Saw pod success Feb 25 23:43:43.114: INFO: Pod "var-expansion-88a7e1a5-9351-424e-afa1-f25fb45f6360" satisfied condition "success or failure" Feb 25 23:43:43.118: INFO: Trying to get logs from node jerma-node pod var-expansion-88a7e1a5-9351-424e-afa1-f25fb45f6360 container dapi-container: STEP: delete the pod Feb 25 23:43:43.176: INFO: Waiting for pod var-expansion-88a7e1a5-9351-424e-afa1-f25fb45f6360 to disappear Feb 25 23:43:43.182: INFO: Pod var-expansion-88a7e1a5-9351-424e-afa1-f25fb45f6360 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:43:43.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-851" for this suite. 
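The variable-expansion test above relies on the kubelet substituting $(VAR) references in a container's command from the pod's own env entries, before any shell is involved. A minimal sketch under assumed names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    # $(DEMO_VAR) is expanded by Kubernetes, not the shell;
    # write $$(DEMO_VAR) to pass a literal through.
    command: ["sh", "-c", "echo expanded: $(DEMO_VAR)"]
    env:
    - name: DEMO_VAR
      value: from-the-pod-spec
EOF
kubectl logs var-expansion-demo   # expect: expanded: from-the-pod-spec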
• [SLOW TEST:8.246 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":280,"completed":13,"skipped":150,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:43:43.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 25 23:43:43.369: INFO: Waiting up to 5m0s for pod "downward-api-0750fb8b-60fe-4553-becf-9658be552d8e" in namespace "downward-api-7971" to be "success or failure" Feb 25 23:43:43.382: INFO: Pod "downward-api-0750fb8b-60fe-4553-becf-9658be552d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.247727ms Feb 25 23:43:45.393: INFO: Pod "downward-api-0750fb8b-60fe-4553-becf-9658be552d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024508241s Feb 25 23:43:47.402: INFO: Pod "downward-api-0750fb8b-60fe-4553-becf-9658be552d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032994703s Feb 25 23:43:49.602: INFO: Pod "downward-api-0750fb8b-60fe-4553-becf-9658be552d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23302073s Feb 25 23:43:51.611: INFO: Pod "downward-api-0750fb8b-60fe-4553-becf-9658be552d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24231537s Feb 25 23:43:53.625: INFO: Pod "downward-api-0750fb8b-60fe-4553-becf-9658be552d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.256019017s Feb 25 23:43:55.635: INFO: Pod "downward-api-0750fb8b-60fe-4553-becf-9658be552d8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.266469775s STEP: Saw pod success Feb 25 23:43:55.636: INFO: Pod "downward-api-0750fb8b-60fe-4553-becf-9658be552d8e" satisfied condition "success or failure" Feb 25 23:43:55.642: INFO: Trying to get logs from node jerma-node pod downward-api-0750fb8b-60fe-4553-becf-9658be552d8e container dapi-container: STEP: delete the pod Feb 25 23:43:55.693: INFO: Waiting for pod downward-api-0750fb8b-60fe-4553-becf-9658be552d8e to disappear Feb 25 23:43:55.730: INFO: Pod downward-api-0750fb8b-60fe-4553-becf-9658be552d8e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:43:55.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7971" for this suite. 
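The downward-API case above injects the container's own requests and limits as environment variables via resourceFieldRef. A sketch with assumed names and values:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits: {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu    # containerName defaults to this container
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF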
• [SLOW TEST:12.552 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":280,"completed":14,"skipped":211,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:43:55.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Starting the proxy Feb 25 23:43:55.901: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix309250732/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:43:55.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3203" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":280,"completed":15,"skipped":212,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:43:56.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-configmap-9z8s STEP: Creating a pod to test atomic-volume-subpath Feb 25 23:43:56.178: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9z8s" in namespace "subpath-7606" to be "success or failure" Feb 25 23:43:56.192: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.921928ms Feb 25 23:43:58.218: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.039212585s Feb 25 23:44:00.225: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046356947s Feb 25 23:44:02.247: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068423767s Feb 25 23:44:04.254: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 8.075378999s Feb 25 23:44:06.265: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 10.086588126s Feb 25 23:44:08.288: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 12.109864555s Feb 25 23:44:10.310: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 14.131157619s Feb 25 23:44:12.318: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 16.139519548s Feb 25 23:44:14.324: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 18.144905124s Feb 25 23:44:16.333: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 20.153997947s Feb 25 23:44:18.345: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 22.166519377s Feb 25 23:44:20.354: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 24.175051233s Feb 25 23:44:22.362: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 26.183879553s Feb 25 23:44:24.371: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.192279735s STEP: Saw pod success Feb 25 23:44:24.371: INFO: Pod "pod-subpath-test-configmap-9z8s" satisfied condition "success or failure" Feb 25 23:44:24.375: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-9z8s container test-container-subpath-configmap-9z8s: STEP: delete the pod Feb 25 23:44:24.434: INFO: Waiting for pod pod-subpath-test-configmap-9z8s to disappear Feb 25 23:44:24.443: INFO: Pod pod-subpath-test-configmap-9z8s no longer exists STEP: Deleting pod pod-subpath-test-configmap-9z8s Feb 25 23:44:24.444: INFO: Deleting pod "pod-subpath-test-configmap-9z8s" in namespace "subpath-7606" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:44:24.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7606" for this suite. 
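The subpath test above mounts a single file out of a ConfigMap volume using subPath, which is what makes the atomic-writer path interesting: the kubelet projects the whole volume atomically, and the subPath picks one entry out of it. Sketch with assumed names (note that subPath-mounted files do not pick up later ConfigMap updates):

kubectl create configmap demo-subpath-config --from-literal=config.txt=hello
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29
    command: ["cat", "/probe/config.txt"]
    volumeMounts:
    - name: cfg
      mountPath: /probe/config.txt
      subPath: config.txt   # mount just this key as a single file
  volumes:
  - name: cfg
    configMap:
      name: demo-subpath-config
EOF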
• [SLOW TEST:28.647 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":280,"completed":16,"skipped":222,"failed":0} S ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:44:24.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 25 23:44:33.419: INFO: Successfully updated pod "pod-update-3516db34-2fc7-4aa9-85f3-c82503d979a6" STEP: verifying the updated pod is in kubernetes Feb 25 23:44:33.482: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:44:33.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2863" for this suite. 
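The pod-update test above changes only metadata on a running pod, since most of a pod's spec is immutable after creation; labels and annotations are among the few fields that can always be updated in place. An equivalent by hand (pod name and label assumed):

# Merge-patch a label onto a running pod, then confirm it stuck.
kubectl patch pod demo-pod --type=merge \
  -p '{"metadata":{"labels":{"time":"updated"}}}'
kubectl get pod demo-pod --show-labels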
• [SLOW TEST:8.861 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":280,"completed":17,"skipped":223,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:44:33.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-dc3ebb6b-b571-4ee7-a7cf-4da16189e40f STEP: Creating a pod to test consume secrets Feb 25 23:44:33.659: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2fe7ff85-c870-4e47-af96-b19b83dab683" in namespace "projected-8957" to be "success or failure" Feb 25 23:44:33.695: INFO: Pod "pod-projected-secrets-2fe7ff85-c870-4e47-af96-b19b83dab683": Phase="Pending", Reason="", readiness=false. Elapsed: 35.776084ms Feb 25 23:44:35.705: INFO: Pod "pod-projected-secrets-2fe7ff85-c870-4e47-af96-b19b83dab683": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046095837s Feb 25 23:44:37.714: INFO: Pod "pod-projected-secrets-2fe7ff85-c870-4e47-af96-b19b83dab683": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054741503s Feb 25 23:44:39.722: INFO: Pod "pod-projected-secrets-2fe7ff85-c870-4e47-af96-b19b83dab683": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062220821s Feb 25 23:44:41.734: INFO: Pod "pod-projected-secrets-2fe7ff85-c870-4e47-af96-b19b83dab683": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074306215s Feb 25 23:44:43.746: INFO: Pod "pod-projected-secrets-2fe7ff85-c870-4e47-af96-b19b83dab683": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086675752s STEP: Saw pod success Feb 25 23:44:43.746: INFO: Pod "pod-projected-secrets-2fe7ff85-c870-4e47-af96-b19b83dab683" satisfied condition "success or failure" Feb 25 23:44:43.849: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-2fe7ff85-c870-4e47-af96-b19b83dab683 container projected-secret-volume-test: STEP: delete the pod Feb 25 23:44:43.923: INFO: Waiting for pod pod-projected-secrets-2fe7ff85-c870-4e47-af96-b19b83dab683 to disappear Feb 25 23:44:43.933: INFO: Pod pod-projected-secrets-2fe7ff85-c870-4e47-af96-b19b83dab683 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:44:43.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8957" for this suite. 
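The projected-secret case above is the same consume-through-a-volume pattern, but via the projected volume type, which can merge secret, configMap, downwardAPI, and serviceAccountToken sources under one mount. Sketch with assumed names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected/secret-file"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: proj
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1
            path: secret-file   # key exposed under this file name
EOF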
• [SLOW TEST:10.523 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":18,"skipped":226,"failed":0} SSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:44:44.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-89eecc0a-ead7-4578-b330-06517cc79cfc in namespace container-probe-7292 Feb 25 23:44:52.320: INFO: Started pod liveness-89eecc0a-ead7-4578-b330-06517cc79cfc in namespace container-probe-7292 STEP: checking the pod's current state and verifying that restartCount is present Feb 25 23:44:52.324: INFO: Initial restart count of pod liveness-89eecc0a-ead7-4578-b330-06517cc79cfc is 0 Feb 25 23:45:14.449: INFO: Restart count of pod container-probe-7292/liveness-89eecc0a-ead7-4578-b330-06517cc79cfc is now 1 (22.125124306s elapsed) Feb 25 23:45:32.650: INFO: Restart count of pod container-probe-7292/liveness-89eecc0a-ead7-4578-b330-06517cc79cfc is now 2 (40.326809775s elapsed) Feb 25 23:45:52.811: INFO: Restart count of pod container-probe-7292/liveness-89eecc0a-ead7-4578-b330-06517cc79cfc is now 3 (1m0.487597028s elapsed) Feb 25 23:46:12.913: INFO: Restart count of pod container-probe-7292/liveness-89eecc0a-ead7-4578-b330-06517cc79cfc is now 4 (1m20.589127912s elapsed) Feb 25 23:47:13.316: INFO: Restart count of pod container-probe-7292/liveness-89eecc0a-ead7-4578-b330-06517cc79cfc is now 5 (2m20.992211181s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:47:13.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7292" for this suite. 
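The restart counts above arrive at widening intervals (22s, 40s, 1m0s, 1m20s, 2m20s) because the kubelet applies exponential back-off between restarts of a crash-looping container; the test only asserts the count increases monotonically. A sketch of a pod whose liveness probe always fails (names assumed):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo   # illustrative name
spec:                   # restartPolicy defaults to Always
  containers:
  - name: liveness
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]  # file never exists -> probe fails
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
kubectl get pod liveness-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'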
• [SLOW TEST:149.376 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":280,"completed":19,"skipped":229,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:47:13.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the initial replication controller Feb 25 23:47:13.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9906' Feb 25 23:47:14.041: INFO: stderr: "" Feb 25 23:47:14.042: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 25 23:47:14.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9906' Feb 25 23:47:14.308: INFO: stderr: "" Feb 25 23:47:14.308: INFO: stdout: "update-demo-nautilus-6crd2 update-demo-nautilus-9gh4c " Feb 25 23:47:14.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6crd2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9906' Feb 25 23:47:14.589: INFO: stderr: "" Feb 25 23:47:14.589: INFO: stdout: "" Feb 25 23:47:14.589: INFO: update-demo-nautilus-6crd2 is created but not running Feb 25 23:47:19.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9906' Feb 25 23:47:20.161: INFO: stderr: "" Feb 25 23:47:20.161: INFO: stdout: "update-demo-nautilus-6crd2 update-demo-nautilus-9gh4c " Feb 25 23:47:20.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6crd2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9906' Feb 25 23:47:20.528: INFO: stderr: "" Feb 25 23:47:20.528: INFO: stdout: "" Feb 25 23:47:20.528: INFO: update-demo-nautilus-6crd2 is created but not running Feb 25 23:47:25.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9906' Feb 25 23:47:25.675: INFO: stderr: "" Feb 25 23:47:25.675: INFO: stdout: "update-demo-nautilus-6crd2 update-demo-nautilus-9gh4c " Feb 25 23:47:25.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6crd2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9906' Feb 25 23:47:25.810: INFO: stderr: "" Feb 25 23:47:25.810: INFO: stdout: "" Feb 25 23:47:25.810: INFO: update-demo-nautilus-6crd2 is created but not running Feb 25 23:47:30.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9906' Feb 25 23:47:30.994: INFO: stderr: "" Feb 25 23:47:30.994: INFO: stdout: "update-demo-nautilus-6crd2 update-demo-nautilus-9gh4c " Feb 25 23:47:30.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6crd2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9906' Feb 25 23:47:31.084: INFO: stderr: "" Feb 25 23:47:31.085: INFO: stdout: "true" Feb 25 23:47:31.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6crd2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9906' Feb 25 23:47:31.225: INFO: stderr: "" Feb 25 23:47:31.226: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 25 23:47:31.226: INFO: validating pod update-demo-nautilus-6crd2 Feb 25 23:47:31.239: INFO: got data: { "image": "nautilus.jpg" } Feb 25 23:47:31.240: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 25 23:47:31.240: INFO: update-demo-nautilus-6crd2 is verified up and running Feb 25 23:47:31.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9gh4c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9906' Feb 25 23:47:31.365: INFO: stderr: "" Feb 25 23:47:31.365: INFO: stdout: "true" Feb 25 23:47:31.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9gh4c -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9906' Feb 25 23:47:31.446: INFO: stderr: "" Feb 25 23:47:31.447: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 25 23:47:31.447: INFO: validating pod update-demo-nautilus-9gh4c Feb 25 23:47:31.487: INFO: got data: { "image": "nautilus.jpg" } Feb 25 23:47:31.488: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 25 23:47:31.488: INFO: update-demo-nautilus-9gh4c is verified up and running STEP: rolling-update to new replication controller Feb 25 23:47:31.493: INFO: scanned /root for discovery docs: Feb 25 23:47:31.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9906' Feb 25 23:48:01.379: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 25 23:48:01.379: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 25 23:48:01.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9906' Feb 25 23:48:01.504: INFO: stderr: "" Feb 25 23:48:01.504: INFO: stdout: "update-demo-kitten-ctfpq update-demo-kitten-h5f8f update-demo-nautilus-9gh4c " STEP: Replicas for name=update-demo: expected=2 actual=3 Feb 25 23:48:06.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9906' Feb 25 23:48:06.679: INFO: stderr: "" Feb 25 23:48:06.680: INFO: stdout: "update-demo-kitten-ctfpq update-demo-kitten-h5f8f " Feb 25 23:48:06.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ctfpq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9906' Feb 25 23:48:06.781: INFO: stderr: "" Feb 25 23:48:06.781: INFO: stdout: "true" Feb 25 23:48:06.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ctfpq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9906' Feb 25 23:48:06.882: INFO: stderr: "" Feb 25 23:48:06.882: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 25 23:48:06.882: INFO: validating pod update-demo-kitten-ctfpq Feb 25 23:48:06.889: INFO: got data: { "image": "kitten.jpg" } Feb 25 23:48:06.890: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Feb 25 23:48:06.890: INFO: update-demo-kitten-ctfpq is verified up and running Feb 25 23:48:06.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h5f8f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9906' Feb 25 23:48:06.998: INFO: stderr: "" Feb 25 23:48:06.999: INFO: stdout: "true" Feb 25 23:48:06.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h5f8f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9906' Feb 25 23:48:07.166: INFO: stderr: "" Feb 25 23:48:07.166: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 25 23:48:07.166: INFO: validating pod update-demo-kitten-h5f8f Feb 25 23:48:07.176: INFO: got data: { "image": "kitten.jpg" } Feb 25 23:48:07.176: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 25 23:48:07.176: INFO: update-demo-kitten-h5f8f is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:48:07.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9906" for this suite. • [SLOW TEST:53.775 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":280,"completed":20,"skipped":240,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:48:07.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-6661 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 25 23:48:07.349: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 25 23:48:07.477: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:48:09.542: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:48:11.483: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:48:13.615: INFO: The 
status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:48:15.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:48:17.549: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:48:19.487: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:48:21.729: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:48:23.486: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:48:25.483: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:48:27.487: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:48:29.486: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:48:31.484: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:48:33.485: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:48:36.176: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 25 23:48:36.189: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 25 23:48:46.330: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6661 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 25 23:48:46.330: INFO: >>> kubeConfig: /root/.kube/config I0225 23:48:46.396835 9 log.go:172] (0xc002860790) (0xc001efd7c0) Create stream I0225 23:48:46.397182 9 log.go:172] (0xc002860790) (0xc001efd7c0) Stream added, broadcasting: 1 I0225 23:48:46.402381 9 log.go:172] (0xc002860790) Reply frame received for 1 I0225 23:48:46.402444 9 log.go:172] (0xc002860790) (0xc002372780) Create stream I0225 23:48:46.402460 9 log.go:172] (0xc002860790) (0xc002372780) Stream added, broadcasting: 3 I0225 23:48:46.404668 9 log.go:172] (0xc002860790) Reply frame received for 3 I0225 23:48:46.404881 9 log.go:172] (0xc002860790) (0xc002372820) Create stream I0225 23:48:46.404908 9 log.go:172] (0xc002860790) (0xc002372820) Stream added, broadcasting: 5 I0225 23:48:46.409232 9 log.go:172] (0xc002860790) Reply frame received for 5 I0225 23:48:47.508425 9 log.go:172] (0xc002860790) Data frame received for 3 I0225 23:48:47.508806 9 log.go:172] (0xc002372780) (3) Data frame handling I0225 23:48:47.508907 9 log.go:172] (0xc002372780) (3) Data frame sent I0225 23:48:47.627115 9 log.go:172] (0xc002860790) (0xc002372780) Stream removed, broadcasting: 3 I0225 23:48:47.627473 9 log.go:172] (0xc002860790) Data frame received for 1 I0225 23:48:47.627545 9 log.go:172] (0xc001efd7c0) (1) Data frame handling I0225 23:48:47.627599 9 log.go:172] (0xc001efd7c0) (1) Data frame sent I0225 23:48:47.627678 9 log.go:172] (0xc002860790) (0xc001efd7c0) Stream removed, broadcasting: 1 I0225 23:48:47.627754 9 log.go:172] (0xc002860790) (0xc002372820) Stream removed, broadcasting: 5 I0225 23:48:47.628154 9 log.go:172] (0xc002860790) Go away received I0225 23:48:47.630129 9 log.go:172] (0xc002860790) (0xc001efd7c0) Stream removed, broadcasting: 1 I0225 23:48:47.630196 9 log.go:172] (0xc002860790) (0xc002372780) Stream removed, broadcasting: 3 I0225 23:48:47.630224 9 log.go:172] (0xc002860790) (0xc002372820) Stream removed, broadcasting: 5 Feb 25 23:48:47.630: INFO: Found all expected endpoints: [netserver-0] Feb 25 23:48:47.638: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v 
'^\s*$'] Namespace:pod-network-test-6661 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 25 23:48:47.638: INFO: >>> kubeConfig: /root/.kube/config I0225 23:48:47.680371 9 log.go:172] (0xc0029564d0) (0xc001a740a0) Create stream I0225 23:48:47.680496 9 log.go:172] (0xc0029564d0) (0xc001a740a0) Stream added, broadcasting: 1 I0225 23:48:47.684813 9 log.go:172] (0xc0029564d0) Reply frame received for 1 I0225 23:48:47.684888 9 log.go:172] (0xc0029564d0) (0xc001e3e0a0) Create stream I0225 23:48:47.684909 9 log.go:172] (0xc0029564d0) (0xc001e3e0a0) Stream added, broadcasting: 3 I0225 23:48:47.686242 9 log.go:172] (0xc0029564d0) Reply frame received for 3 I0225 23:48:47.686344 9 log.go:172] (0xc0029564d0) (0xc001a74320) Create stream I0225 23:48:47.686361 9 log.go:172] (0xc0029564d0) (0xc001a74320) Stream added, broadcasting: 5 I0225 23:48:47.688504 9 log.go:172] (0xc0029564d0) Reply frame received for 5 I0225 23:48:48.780797 9 log.go:172] (0xc0029564d0) Data frame received for 3 I0225 23:48:48.780883 9 log.go:172] (0xc001e3e0a0) (3) Data frame handling I0225 23:48:48.780914 9 log.go:172] (0xc001e3e0a0) (3) Data frame sent I0225 23:48:48.876067 9 log.go:172] (0xc0029564d0) Data frame received for 1 I0225 23:48:48.876293 9 log.go:172] (0xc0029564d0) (0xc001e3e0a0) Stream removed, broadcasting: 3 I0225 23:48:48.876404 9 log.go:172] (0xc001a740a0) (1) Data frame handling I0225 23:48:48.876445 9 log.go:172] (0xc001a740a0) (1) Data frame sent I0225 23:48:48.876495 9 log.go:172] (0xc0029564d0) (0xc001a740a0) Stream removed, broadcasting: 1 I0225 23:48:48.876664 9 log.go:172] (0xc0029564d0) (0xc001a74320) Stream removed, broadcasting: 5 I0225 23:48:48.876845 9 log.go:172] (0xc0029564d0) Go away received I0225 23:48:48.876906 9 log.go:172] (0xc0029564d0) (0xc001a740a0) Stream removed, broadcasting: 1 I0225 23:48:48.876931 9 log.go:172] (0xc0029564d0) (0xc001e3e0a0) Stream removed, broadcasting: 3 I0225 23:48:48.876944 9 log.go:172] (0xc0029564d0) (0xc001a74320) Stream removed, broadcasting: 5 Feb 25 23:48:48.877: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:48:48.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6661" for this suite. 
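
The connectivity check this test just ran is visible verbatim in the ExecWithOptions entries: it pipes "echo hostName" over UDP to each netserver pod and expects a hostname back. A minimal sketch of reproducing that check by hand with kubectl exec (the pod name, namespace, and IP below are the ones from this run and are placeholders for any later reproduction; kubectl exec is the manual stand-in for the framework's ExecWithOptions call):

  # Send "hostName" over UDP to a netserver pod and print the reply.
  POD_IP=10.44.0.1   # target netserver pod IP (assumption: taken from this run's log)
  kubectl --kubeconfig=/root/.kube/config exec host-test-container-pod \
    --namespace=pod-network-test-6661 -- \
    /bin/sh -c "echo hostName | nc -w 1 -u ${POD_IP} 8081 | grep -v '^\s*\$'"

A non-empty reply is what the test records as "Found all expected endpoints".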
• [SLOW TEST:41.715 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":21,"skipped":240,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:48:48.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod test-webserver-90356a66-d9e7-42c8-a681-2fbe320cf731 in namespace container-probe-2314 Feb 25 23:49:05.388: INFO: Started pod test-webserver-90356a66-d9e7-42c8-a681-2fbe320cf731 in namespace container-probe-2314 STEP: checking the pod's current state and verifying that restartCount is present Feb 25 23:49:05.393: INFO: Initial restart count of pod test-webserver-90356a66-d9e7-42c8-a681-2fbe320cf731 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:53:06.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2314" for this suite. 
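
This probe test passes by observing that restartCount stays 0 for roughly four minutes while an HTTP liveness probe keeps succeeding. A minimal sketch of the kind of pod it exercises; the image, port, and timings here are illustrative assumptions, not values read from this log:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-httpget-demo
  spec:
    containers:
    - name: test-webserver
      image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0  # assumed image
      livenessProbe:
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 5
  EOF

As long as the probe returns 2xx/3xx, the kubelet never restarts the container, which is exactly the invariant checked here.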
• [SLOW TEST:258.139 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":22,"skipped":258,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:53:07.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-21 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 25 23:53:07.208: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 25 23:53:07.244: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:53:09.441: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:53:11.251: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:53:13.485: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:53:15.261: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:53:17.257: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:53:19.254: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:53:21.254: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:53:23.253: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:53:25.252: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:53:27.258: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 25 23:53:29.252: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 25 23:53:29.261: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 25 23:53:31.271: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 25 23:53:33.273: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 25 23:53:35.270: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 25 23:53:47.306: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-21 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 25 23:53:47.307: INFO: >>> 
kubeConfig: /root/.kube/config I0225 23:53:47.379492 9 log.go:172] (0xc0027c2370) (0xc001eac500) Create stream I0225 23:53:47.379857 9 log.go:172] (0xc0027c2370) (0xc001eac500) Stream added, broadcasting: 1 I0225 23:53:47.385285 9 log.go:172] (0xc0027c2370) Reply frame received for 1 I0225 23:53:47.385336 9 log.go:172] (0xc0027c2370) (0xc0013b6500) Create stream I0225 23:53:47.385356 9 log.go:172] (0xc0027c2370) (0xc0013b6500) Stream added, broadcasting: 3 I0225 23:53:47.387003 9 log.go:172] (0xc0027c2370) Reply frame received for 3 I0225 23:53:47.387041 9 log.go:172] (0xc0027c2370) (0xc0015a00a0) Create stream I0225 23:53:47.387062 9 log.go:172] (0xc0027c2370) (0xc0015a00a0) Stream added, broadcasting: 5 I0225 23:53:47.388324 9 log.go:172] (0xc0027c2370) Reply frame received for 5 I0225 23:53:47.518009 9 log.go:172] (0xc0027c2370) Data frame received for 3 I0225 23:53:47.518276 9 log.go:172] (0xc0013b6500) (3) Data frame handling I0225 23:53:47.518323 9 log.go:172] (0xc0013b6500) (3) Data frame sent I0225 23:53:47.628666 9 log.go:172] (0xc0027c2370) (0xc0013b6500) Stream removed, broadcasting: 3 I0225 23:53:47.629156 9 log.go:172] (0xc0027c2370) Data frame received for 1 I0225 23:53:47.629177 9 log.go:172] (0xc001eac500) (1) Data frame handling I0225 23:53:47.629217 9 log.go:172] (0xc001eac500) (1) Data frame sent I0225 23:53:47.629799 9 log.go:172] (0xc0027c2370) (0xc001eac500) Stream removed, broadcasting: 1 I0225 23:53:47.630069 9 log.go:172] (0xc0027c2370) (0xc0015a00a0) Stream removed, broadcasting: 5 I0225 23:53:47.630201 9 log.go:172] (0xc0027c2370) Go away received I0225 23:53:47.630691 9 log.go:172] (0xc0027c2370) (0xc001eac500) Stream removed, broadcasting: 1 I0225 23:53:47.630745 9 log.go:172] (0xc0027c2370) (0xc0013b6500) Stream removed, broadcasting: 3 I0225 23:53:47.630780 9 log.go:172] (0xc0027c2370) (0xc0015a00a0) Stream removed, broadcasting: 5 Feb 25 23:53:47.631: INFO: Waiting for responses: map[] Feb 25 23:53:47.639: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-21 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 25 23:53:47.639: INFO: >>> kubeConfig: /root/.kube/config I0225 23:53:47.684815 9 log.go:172] (0xc002956370) (0xc0015a0a00) Create stream I0225 23:53:47.685094 9 log.go:172] (0xc002956370) (0xc0015a0a00) Stream added, broadcasting: 1 I0225 23:53:47.692868 9 log.go:172] (0xc002956370) Reply frame received for 1 I0225 23:53:47.693008 9 log.go:172] (0xc002956370) (0xc001e3e640) Create stream I0225 23:53:47.693023 9 log.go:172] (0xc002956370) (0xc001e3e640) Stream added, broadcasting: 3 I0225 23:53:47.696827 9 log.go:172] (0xc002956370) Reply frame received for 3 I0225 23:53:47.697165 9 log.go:172] (0xc002956370) (0xc0015a0b40) Create stream I0225 23:53:47.697212 9 log.go:172] (0xc002956370) (0xc0015a0b40) Stream added, broadcasting: 5 I0225 23:53:47.700164 9 log.go:172] (0xc002956370) Reply frame received for 5 I0225 23:53:47.806961 9 log.go:172] (0xc002956370) Data frame received for 3 I0225 23:53:47.807188 9 log.go:172] (0xc001e3e640) (3) Data frame handling I0225 23:53:47.807270 9 log.go:172] (0xc001e3e640) (3) Data frame sent I0225 23:53:47.928741 9 log.go:172] (0xc002956370) Data frame received for 1 I0225 23:53:47.928934 9 log.go:172] (0xc002956370) (0xc001e3e640) Stream removed, broadcasting: 3 I0225 23:53:47.929012 9 log.go:172] (0xc0015a0a00) (1) 
Data frame handling I0225 23:53:47.929036 9 log.go:172] (0xc0015a0a00) (1) Data frame sent I0225 23:53:47.929092 9 log.go:172] (0xc002956370) (0xc0015a0b40) Stream removed, broadcasting: 5 I0225 23:53:47.929167 9 log.go:172] (0xc002956370) (0xc0015a0a00) Stream removed, broadcasting: 1 I0225 23:53:47.929221 9 log.go:172] (0xc002956370) Go away received I0225 23:53:47.930293 9 log.go:172] (0xc002956370) (0xc0015a0a00) Stream removed, broadcasting: 1 I0225 23:53:47.930462 9 log.go:172] (0xc002956370) (0xc001e3e640) Stream removed, broadcasting: 3 I0225 23:53:47.930480 9 log.go:172] (0xc002956370) (0xc0015a0b40) Stream removed, broadcasting: 5 Feb 25 23:53:47.930: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:53:47.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-21" for this suite. • [SLOW TEST:40.905 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":280,"completed":23,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:53:47.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:54:05.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3168" for this suite. • [SLOW TEST:17.308 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":280,"completed":24,"skipped":362,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:54:05.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Feb 25 23:54:05.369: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 25 23:54:05.387: INFO: Waiting for terminating namespaces to be deleted... Feb 25 23:54:05.392: INFO: Logging pods the kubelet thinks is on node jerma-node before test Feb 25 23:54:05.419: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 25 23:54:05.419: INFO: Container kube-proxy ready: true, restart count 0 Feb 25 23:54:05.419: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 25 23:54:05.419: INFO: Container weave ready: true, restart count 1 Feb 25 23:54:05.419: INFO: Container weave-npc ready: true, restart count 0 Feb 25 23:54:05.419: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Feb 25 23:54:05.459: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 25 23:54:05.459: INFO: Container kube-controller-manager ready: true, restart count 19 Feb 25 23:54:05.459: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 25 23:54:05.459: INFO: Container kube-proxy ready: true, restart count 0 Feb 25 23:54:05.459: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 25 23:54:05.459: INFO: Container weave ready: true, restart count 0 Feb 25 23:54:05.459: INFO: Container weave-npc ready: true, restart count 0 Feb 25 23:54:05.459: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 25 23:54:05.459: INFO: Container kube-scheduler ready: true, restart count 25 Feb 25 23:54:05.459: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 25 23:54:05.459: INFO: Container kube-apiserver ready: true, restart count 1 Feb 25 23:54:05.459: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 25 23:54:05.459: INFO: Container etcd ready: true, restart count 1 Feb 25 23:54:05.459: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container 
statuses recorded) Feb 25 23:54:05.459: INFO: Container coredns ready: true, restart count 0 Feb 25 23:54:05.459: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 25 23:54:05.459: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-mvvl6gufaqub Feb 25 23:54:07.020: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 25 23:54:07.020: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 25 23:54:07.020: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Feb 25 23:54:07.020: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub Feb 25 23:54:07.020: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub Feb 25 23:54:07.020: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Feb 25 23:54:07.020: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node Feb 25 23:54:07.020: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 25 23:54:07.020: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node Feb 25 23:54:07.020: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub STEP: Starting Pods to consume most of the cluster CPU. Feb 25 23:54:07.020: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node Feb 25 23:54:07.098: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-d16fe087-0786-4a79-a360-6c4103500e54.15f6ca63621b5469], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5926/filler-pod-d16fe087-0786-4a79-a360-6c4103500e54 to jerma-server-mvvl6gufaqub] STEP: Considering event: Type = [Normal], Name = [filler-pod-d16fe087-0786-4a79-a360-6c4103500e54.15f6ca648faa0409], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d16fe087-0786-4a79-a360-6c4103500e54.15f6ca655f749f9f], Reason = [Created], Message = [Created container filler-pod-d16fe087-0786-4a79-a360-6c4103500e54] STEP: Considering event: Type = [Normal], Name = [filler-pod-d16fe087-0786-4a79-a360-6c4103500e54.15f6ca6581eb1bc5], Reason = [Started], Message = [Started container filler-pod-d16fe087-0786-4a79-a360-6c4103500e54] STEP: Considering event: Type = [Normal], Name = [filler-pod-d2c46912-59b6-4869-aa3f-bb4bb3ad95ac.15f6ca635f32ae71], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5926/filler-pod-d2c46912-59b6-4869-aa3f-bb4bb3ad95ac to jerma-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-d2c46912-59b6-4869-aa3f-bb4bb3ad95ac.15f6ca646388c6a0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d2c46912-59b6-4869-aa3f-bb4bb3ad95ac.15f6ca656c57e81d], Reason = [Created], Message = [Created container filler-pod-d2c46912-59b6-4869-aa3f-bb4bb3ad95ac] STEP: Considering event: Type = [Normal], Name = [filler-pod-d2c46912-59b6-4869-aa3f-bb4bb3ad95ac.15f6ca6589e159ec], Reason = [Started], Message = [Started container filler-pod-d2c46912-59b6-4869-aa3f-bb4bb3ad95ac] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f6ca6630c336f9], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node jerma-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-server-mvvl6gufaqub STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:54:20.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5926" for this suite. 
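
The FailedScheduling event above ("0/2 nodes are available: 2 Insufficient cpu.") follows directly from how the scheduler accounts CPU: it sums container *requests* per node against allocatable CPU, which is why the test first creates filler pods sized to consume the remaining capacity. A sketch of a pod whose request participates in that accounting (the request value is illustrative; the pause image is the one used by the filler pods in this run):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cpu-request-demo
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
      resources:
        requests:
          cpu: 500m
  EOF
  kubectl describe pod cpu-request-demo   # shows FailedScheduling once requests exceed allocatable
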
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:15.301 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":280,"completed":25,"skipped":367,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:54:20.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 25 23:54:20.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9486' Feb 25 23:54:23.760: INFO: stderr: "" Feb 25 23:54:23.760: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Feb 25 23:54:38.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9486 -o json' Feb 25 23:54:38.958: INFO: stderr: "" Feb 25 23:54:38.959: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-25T23:54:23Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9486\",\n \"resourceVersion\": \"10756873\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9486/pods/e2e-test-httpd-pod\",\n \"uid\": \"bf4dd0c4-32d6-4be9-8447-927fcf66010f\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rrjst\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-server-mvvl6gufaqub\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": 
\"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rrjst\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-rrjst\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-25T23:54:23Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-25T23:54:34Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-25T23:54:34Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-25T23:54:23Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://3b7be156debe2c2c68a97384f96fc0f533921f2bef84bf73ccb34090b0f17176\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-25T23:54:32Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.234\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.5\",\n \"podIPs\": [\n {\n \"ip\": \"10.32.0.5\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-25T23:54:23Z\"\n }\n}\n" STEP: replace the image in the pod Feb 25 23:54:38.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9486' Feb 25 23:54:39.366: INFO: stderr: "" Feb 25 23:54:39.367: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904 Feb 25 23:54:39.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9486' Feb 25 23:54:45.885: INFO: stderr: "" Feb 25 23:54:45.885: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:54:45.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9486" for this suite. 
• [SLOW TEST:25.334 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":280,"completed":26,"skipped":371,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:54:45.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 25 23:54:45.996: INFO: Waiting up to 5m0s for pod "downward-api-43ffec0e-0290-477f-a4c7-782c7f781d45" in namespace "downward-api-846" to be "success or failure" Feb 25 23:54:46.017: INFO: Pod "downward-api-43ffec0e-0290-477f-a4c7-782c7f781d45": Phase="Pending", Reason="", readiness=false. Elapsed: 21.223729ms Feb 25 23:54:48.029: INFO: Pod "downward-api-43ffec0e-0290-477f-a4c7-782c7f781d45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032728271s Feb 25 23:54:50.039: INFO: Pod "downward-api-43ffec0e-0290-477f-a4c7-782c7f781d45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043319446s Feb 25 23:54:52.046: INFO: Pod "downward-api-43ffec0e-0290-477f-a4c7-782c7f781d45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050606703s Feb 25 23:54:54.062: INFO: Pod "downward-api-43ffec0e-0290-477f-a4c7-782c7f781d45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065703267s STEP: Saw pod success Feb 25 23:54:54.063: INFO: Pod "downward-api-43ffec0e-0290-477f-a4c7-782c7f781d45" satisfied condition "success or failure" Feb 25 23:54:54.069: INFO: Trying to get logs from node jerma-node pod downward-api-43ffec0e-0290-477f-a4c7-782c7f781d45 container dapi-container: STEP: delete the pod Feb 25 23:54:54.164: INFO: Waiting for pod downward-api-43ffec0e-0290-477f-a4c7-782c7f781d45 to disappear Feb 25 23:54:54.170: INFO: Pod downward-api-43ffec0e-0290-477f-a4c7-782c7f781d45 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:54:54.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-846" for this suite. 
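
The downward API wiring this test validates is status.hostIP exposed to the container as an environment variable via fieldRef. A minimal sketch (the pod name and variable name are placeholders; the container name dapi-container matches the one logged above):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-hostip-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
      env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
  EOF

The test passes when the container log contains the node's IP, which is why it fetches logs from the dapi-container after the pod succeeds.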
• [SLOW TEST:8.296 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":280,"completed":27,"skipped":382,"failed":0} SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:54:54.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service endpoint-test2 in namespace services-6478 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6478 to expose endpoints map[] Feb 25 23:54:54.528: INFO: successfully validated that service endpoint-test2 in namespace services-6478 exposes endpoints map[] (42.209794ms elapsed) STEP: Creating pod pod1 in namespace services-6478 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6478 to expose endpoints map[pod1:[80]] Feb 25 23:54:58.690: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.116603691s elapsed, will retry) Feb 25 23:55:01.767: INFO: successfully validated that service endpoint-test2 in namespace services-6478 exposes endpoints map[pod1:[80]] (7.193104923s elapsed) STEP: Creating pod pod2 in namespace services-6478 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6478 to expose endpoints map[pod1:[80] pod2:[80]] Feb 25 23:55:06.133: INFO: Unexpected endpoints: found map[a537cc13-9da9-4fe4-8818-4b55e8446f11:[80]], expected map[pod1:[80] pod2:[80]] (4.353719371s elapsed, will retry) Feb 25 23:55:08.197: INFO: successfully validated that service endpoint-test2 in namespace services-6478 exposes endpoints map[pod1:[80] pod2:[80]] (6.417119729s elapsed) STEP: Deleting pod pod1 in namespace services-6478 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6478 to expose endpoints map[pod2:[80]] Feb 25 23:55:08.234: INFO: successfully validated that service endpoint-test2 in namespace services-6478 exposes endpoints map[pod2:[80]] (30.91985ms elapsed) STEP: Deleting pod pod2 in namespace services-6478 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6478 to expose endpoints map[] Feb 25 23:55:08.254: INFO: successfully validated that service endpoint-test2 in namespace services-6478 exposes endpoints map[] (6.442291ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:55:08.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6478" for this 
suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:14.248 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":280,"completed":28,"skipped":385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:55:08.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 25 23:55:08.545: INFO: Waiting up to 5m0s for pod "downwardapi-volume-370d373c-d277-4eed-8173-1c58e7605d17" in namespace "projected-2337" to be "success or failure" Feb 25 23:55:08.667: INFO: Pod "downwardapi-volume-370d373c-d277-4eed-8173-1c58e7605d17": Phase="Pending", Reason="", readiness=false. Elapsed: 121.044463ms Feb 25 23:55:10.678: INFO: Pod "downwardapi-volume-370d373c-d277-4eed-8173-1c58e7605d17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13212714s Feb 25 23:55:12.686: INFO: Pod "downwardapi-volume-370d373c-d277-4eed-8173-1c58e7605d17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140508548s Feb 25 23:55:14.708: INFO: Pod "downwardapi-volume-370d373c-d277-4eed-8173-1c58e7605d17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162384879s Feb 25 23:55:16.715: INFO: Pod "downwardapi-volume-370d373c-d277-4eed-8173-1c58e7605d17": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169189127s Feb 25 23:55:18.723: INFO: Pod "downwardapi-volume-370d373c-d277-4eed-8173-1c58e7605d17": Phase="Pending", Reason="", readiness=false. Elapsed: 10.177192502s Feb 25 23:55:20.734: INFO: Pod "downwardapi-volume-370d373c-d277-4eed-8173-1c58e7605d17": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.188386476s STEP: Saw pod success Feb 25 23:55:20.734: INFO: Pod "downwardapi-volume-370d373c-d277-4eed-8173-1c58e7605d17" satisfied condition "success or failure" Feb 25 23:55:20.740: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-370d373c-d277-4eed-8173-1c58e7605d17 container client-container: STEP: delete the pod Feb 25 23:55:20.816: INFO: Waiting for pod downwardapi-volume-370d373c-d277-4eed-8173-1c58e7605d17 to disappear Feb 25 23:55:20.883: INFO: Pod downwardapi-volume-370d373c-d277-4eed-8173-1c58e7605d17 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:55:20.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2337" for this suite. • [SLOW TEST:12.450 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":29,"skipped":424,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:55:20.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Feb 25 23:55:21.092: INFO: Created pod &Pod{ObjectMeta:{dns-5398 dns-5398 /api/v1/namespaces/dns-5398/pods/dns-5398 77999720-215a-44cf-b177-c536b23fcab8 10757090 0 2020-02-25 23:55:21 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9ltl4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9ltl4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9ltl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 25 23:55:21.102: INFO: The status of Pod dns-5398 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:55:23.109: INFO: The status of Pod dns-5398 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:55:25.108: INFO: The status of Pod dns-5398 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:55:27.111: INFO: The status of Pod dns-5398 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:55:29.192: 
INFO: The status of Pod dns-5398 is Pending, waiting for it to be Running (with Ready = true) Feb 25 23:55:31.109: INFO: The status of Pod dns-5398 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Feb 25 23:55:31.110: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5398 PodName:dns-5398 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 25 23:55:31.110: INFO: >>> kubeConfig: /root/.kube/config I0225 23:55:31.177128 9 log.go:172] (0xc002957e40) (0xc0023fafa0) Create stream I0225 23:55:31.177423 9 log.go:172] (0xc002957e40) (0xc0023fafa0) Stream added, broadcasting: 1 I0225 23:55:31.185351 9 log.go:172] (0xc002957e40) Reply frame received for 1 I0225 23:55:31.185426 9 log.go:172] (0xc002957e40) (0xc0018694a0) Create stream I0225 23:55:31.185452 9 log.go:172] (0xc002957e40) (0xc0018694a0) Stream added, broadcasting: 3 I0225 23:55:31.188593 9 log.go:172] (0xc002957e40) Reply frame received for 3 I0225 23:55:31.188745 9 log.go:172] (0xc002957e40) (0xc001d90820) Create stream I0225 23:55:31.188768 9 log.go:172] (0xc002957e40) (0xc001d90820) Stream added, broadcasting: 5 I0225 23:55:31.191079 9 log.go:172] (0xc002957e40) Reply frame received for 5 I0225 23:55:31.298463 9 log.go:172] (0xc002957e40) Data frame received for 3 I0225 23:55:31.298522 9 log.go:172] (0xc0018694a0) (3) Data frame handling I0225 23:55:31.298763 9 log.go:172] (0xc0018694a0) (3) Data frame sent I0225 23:55:31.379607 9 log.go:172] (0xc002957e40) Data frame received for 1 I0225 23:55:31.379861 9 log.go:172] (0xc0023fafa0) (1) Data frame handling I0225 23:55:31.380263 9 log.go:172] (0xc0023fafa0) (1) Data frame sent I0225 23:55:31.380394 9 log.go:172] (0xc002957e40) (0xc0023fafa0) Stream removed, broadcasting: 1 I0225 23:55:31.380822 9 log.go:172] (0xc002957e40) (0xc001d90820) Stream removed, broadcasting: 5 I0225 23:55:31.380967 9 log.go:172] (0xc002957e40) (0xc0018694a0) Stream removed, broadcasting: 3 I0225 23:55:31.381042 9 log.go:172] (0xc002957e40) (0xc0023fafa0) Stream removed, broadcasting: 1 I0225 23:55:31.381053 9 log.go:172] (0xc002957e40) (0xc0018694a0) Stream removed, broadcasting: 3 I0225 23:55:31.381119 9 log.go:172] (0xc002957e40) (0xc001d90820) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
I0225 23:55:31.381200 9 log.go:172] (0xc002957e40) Go away received Feb 25 23:55:31.381: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5398 PodName:dns-5398 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 25 23:55:31.381: INFO: >>> kubeConfig: /root/.kube/config I0225 23:55:31.441930 9 log.go:172] (0xc002875970) (0xc0018697c0) Create stream I0225 23:55:31.442138 9 log.go:172] (0xc002875970) (0xc0018697c0) Stream added, broadcasting: 1 I0225 23:55:31.450893 9 log.go:172] (0xc002875970) Reply frame received for 1 I0225 23:55:31.450960 9 log.go:172] (0xc002875970) (0xc001d908c0) Create stream I0225 23:55:31.450983 9 log.go:172] (0xc002875970) (0xc001d908c0) Stream added, broadcasting: 3 I0225 23:55:31.455035 9 log.go:172] (0xc002875970) Reply frame received for 3 I0225 23:55:31.455197 9 log.go:172] (0xc002875970) (0xc001ead680) Create stream I0225 23:55:31.455214 9 log.go:172] (0xc002875970) (0xc001ead680) Stream added, broadcasting: 5 I0225 23:55:31.456574 9 log.go:172] (0xc002875970) Reply frame received for 5 I0225 23:55:31.538158 9 log.go:172] (0xc002875970) Data frame received for 3 I0225 23:55:31.538348 9 log.go:172] (0xc001d908c0) (3) Data frame handling I0225 23:55:31.538374 9 log.go:172] (0xc001d908c0) (3) Data frame sent I0225 23:55:31.625058 9 log.go:172] (0xc002875970) Data frame received for 1 I0225 23:55:31.625419 9 log.go:172] (0xc002875970) (0xc001d908c0) Stream removed, broadcasting: 3 I0225 23:55:31.625716 9 log.go:172] (0xc0018697c0) (1) Data frame handling I0225 23:55:31.625783 9 log.go:172] (0xc0018697c0) (1) Data frame sent I0225 23:55:31.625860 9 log.go:172] (0xc002875970) (0xc001ead680) Stream removed, broadcasting: 5 I0225 23:55:31.625929 9 log.go:172] (0xc002875970) (0xc0018697c0) Stream removed, broadcasting: 1 I0225 23:55:31.625980 9 log.go:172] (0xc002875970) Go away received I0225 23:55:31.626756 9 log.go:172] (0xc002875970) (0xc0018697c0) Stream removed, broadcasting: 1 I0225 23:55:31.626796 9 log.go:172] (0xc002875970) (0xc001d908c0) Stream removed, broadcasting: 3 I0225 23:55:31.626819 9 log.go:172] (0xc002875970) (0xc001ead680) Stream removed, broadcasting: 5 Feb 25 23:55:31.626: INFO: Deleting pod dns-5398... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:55:31.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5398" for this suite. 
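
The pod dump above shows the configuration under test: DNSPolicy None with Nameservers [1.1.1.1] and Searches [resolv.conf.local]. Reduced to a standalone manifest (pod name is a placeholder; the nameserver, search domain, image, and pause args are the values visible in this run's pod spec):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-custom-demo
  spec:
    dnsPolicy: None
    dnsConfig:
      nameservers:
      - 1.1.1.1
      searches:
      - resolv.conf.local
    containers:
    - name: agnhost
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]
  EOF
  kubectl exec dns-custom-demo -- cat /etc/resolv.conf   # should list only 1.1.1.1 and resolv.conf.local

With dnsPolicy None, the kubelet writes resolv.conf solely from dnsConfig, which is what the agnhost dns-suffix and dns-server-list checks above verify.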
• [SLOW TEST:10.756 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":280,"completed":30,"skipped":437,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:55:31.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-60178426-6a4b-4a78-b7aa-1f520dd30f53 in namespace container-probe-8463 Feb 25 23:55:42.035: INFO: Started pod busybox-60178426-6a4b-4a78-b7aa-1f520dd30f53 in namespace container-probe-8463 STEP: checking the pod's current state and verifying that restartCount is present Feb 25 23:55:42.040: INFO: Initial restart count of pod busybox-60178426-6a4b-4a78-b7aa-1f520dd30f53 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:59:42.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8463" for this suite. 
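
The four-minute quiet stretch between "Initial restart count ... is 0" and the teardown above is deliberate: the suite simply watches restartCount for a while and passes only if it never moves. The canonical shape of an exec liveness probe that keeps succeeding looks roughly like this (a sketch; the image, timings, and sleep duration are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-example     # hypothetical name
spec:
  containers:
    - name: busybox
      image: busybox
      # Create the probed file once, then stay alive; every probe run
      # finds /tmp/health, so the kubelet never restarts the container.
      command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
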
• [SLOW TEST:251.186 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":31,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:59:42.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 25 23:59:43.000: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e837d858-33a0-4cf0-a000-9ce228eafa88" in namespace "security-context-test-4562" to be "success or failure" Feb 25 23:59:43.007: INFO: Pod "busybox-user-65534-e837d858-33a0-4cf0-a000-9ce228eafa88": Phase="Pending", Reason="", readiness=false. Elapsed: 7.133569ms Feb 25 23:59:45.018: INFO: Pod "busybox-user-65534-e837d858-33a0-4cf0-a000-9ce228eafa88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017780168s Feb 25 23:59:47.027: INFO: Pod "busybox-user-65534-e837d858-33a0-4cf0-a000-9ce228eafa88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027127509s Feb 25 23:59:49.037: INFO: Pod "busybox-user-65534-e837d858-33a0-4cf0-a000-9ce228eafa88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036987935s Feb 25 23:59:51.046: INFO: Pod "busybox-user-65534-e837d858-33a0-4cf0-a000-9ce228eafa88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045509903s Feb 25 23:59:53.060: INFO: Pod "busybox-user-65534-e837d858-33a0-4cf0-a000-9ce228eafa88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059200774s Feb 25 23:59:53.060: INFO: Pod "busybox-user-65534-e837d858-33a0-4cf0-a000-9ce228eafa88" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 25 23:59:53.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4562" for this suite. 
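
For the Security Context test above, "success or failure" is satisfied because the container process really runs as UID 65534 and exits 0. The knob is a single field on the container's securityContext; a minimal sketch (image and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "id -u"]   # prints 65534 when the context is applied
      securityContext:
        runAsUser: 65534               # the conventional "nobody" UID
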
• [SLOW TEST:10.223 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":32,"skipped":461,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 25 23:59:53.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-29d66060-31b8-4d60-8e51-14d21a25be67 STEP: Creating a pod to test consume configMaps Feb 25 23:59:53.352: INFO: Waiting up to 5m0s for pod "pod-configmaps-63b7783d-9a86-46ec-bad2-673efba35181" in namespace "configmap-6701" to be "success or failure" Feb 25 23:59:53.366: INFO: Pod "pod-configmaps-63b7783d-9a86-46ec-bad2-673efba35181": Phase="Pending", Reason="", readiness=false. Elapsed: 13.638545ms Feb 25 23:59:55.373: INFO: Pod "pod-configmaps-63b7783d-9a86-46ec-bad2-673efba35181": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020329325s Feb 25 23:59:57.383: INFO: Pod "pod-configmaps-63b7783d-9a86-46ec-bad2-673efba35181": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030246943s Feb 25 23:59:59.391: INFO: Pod "pod-configmaps-63b7783d-9a86-46ec-bad2-673efba35181": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038461061s Feb 26 00:00:01.580: INFO: Pod "pod-configmaps-63b7783d-9a86-46ec-bad2-673efba35181": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227808487s Feb 26 00:00:03.597: INFO: Pod "pod-configmaps-63b7783d-9a86-46ec-bad2-673efba35181": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.244175869s STEP: Saw pod success Feb 26 00:00:03.597: INFO: Pod "pod-configmaps-63b7783d-9a86-46ec-bad2-673efba35181" satisfied condition "success or failure" Feb 26 00:00:03.607: INFO: Trying to get logs from node jerma-node pod pod-configmaps-63b7783d-9a86-46ec-bad2-673efba35181 container configmap-volume-test: STEP: delete the pod Feb 26 00:00:03.990: INFO: Waiting for pod pod-configmaps-63b7783d-9a86-46ec-bad2-673efba35181 to disappear Feb 26 00:00:04.015: INFO: Pod pod-configmaps-63b7783d-9a86-46ec-bad2-673efba35181 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:00:04.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6701" for this suite. • [SLOW TEST:10.963 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":33,"skipped":490,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:00:04.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 26 00:00:04.309: INFO: Waiting up to 5m0s for pod "downward-api-435a643d-5f7b-47e2-a055-2212b279ae70" in namespace "downward-api-4591" to be "success or failure" Feb 26 00:00:04.365: INFO: Pod "downward-api-435a643d-5f7b-47e2-a055-2212b279ae70": Phase="Pending", Reason="", readiness=false. Elapsed: 55.572259ms Feb 26 00:00:06.372: INFO: Pod "downward-api-435a643d-5f7b-47e2-a055-2212b279ae70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062286739s Feb 26 00:00:08.381: INFO: Pod "downward-api-435a643d-5f7b-47e2-a055-2212b279ae70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071374881s Feb 26 00:00:10.391: INFO: Pod "downward-api-435a643d-5f7b-47e2-a055-2212b279ae70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08198864s Feb 26 00:00:12.399: INFO: Pod "downward-api-435a643d-5f7b-47e2-a055-2212b279ae70": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.089267106s STEP: Saw pod success Feb 26 00:00:12.399: INFO: Pod "downward-api-435a643d-5f7b-47e2-a055-2212b279ae70" satisfied condition "success or failure" Feb 26 00:00:12.402: INFO: Trying to get logs from node jerma-node pod downward-api-435a643d-5f7b-47e2-a055-2212b279ae70 container dapi-container: STEP: delete the pod Feb 26 00:00:12.466: INFO: Waiting for pod downward-api-435a643d-5f7b-47e2-a055-2212b279ae70 to disappear Feb 26 00:00:12.469: INFO: Pod downward-api-435a643d-5f7b-47e2-a055-2212b279ae70 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:00:12.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4591" for this suite. • [SLOW TEST:8.449 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":280,"completed":34,"skipped":494,"failed":0} [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:00:12.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: validating api versions Feb 26 00:00:12.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 26 00:00:12.933: INFO: stderr: "" Feb 26 00:00:12.933: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:00:12.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7627" for this suite. 
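
Two checks finish in the stretch above. The kubectl one is self-explanatory (look for "v1" in the `kubectl api-versions` output); the Downward API one is subtler: when a container declares no resource limits, env vars built from resourceFieldRef fall back to the node's allocatable capacity, which is exactly what "default limits.cpu/memory from node allocatable" asserts. The wiring looks roughly like this (a sketch; names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example         # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env"]
      env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu     # no limit declared -> node allocatable CPU
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory  # no limit declared -> node allocatable memory
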
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":280,"completed":35,"skipped":494,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:00:12.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Feb 26 00:00:13.375: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:00:31.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9706" for this suite. • [SLOW TEST:18.549 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":280,"completed":36,"skipped":508,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:00:31.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-845.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-845.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-845.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-845.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-845.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-845.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-845.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 26 00:00:43.818: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:43.833: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:43.840: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:43.846: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:43.874: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods 
dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:43.891: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:43.903: INFO: Unable to read jessie_udp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:43.931: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:43.950: INFO: Lookups using dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local jessie_udp@dns-test-service-2.dns-845.svc.cluster.local jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local] Feb 26 00:00:48.958: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:48.963: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:48.967: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:48.971: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:48.982: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:48.995: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:48.999: INFO: Unable to read jessie_udp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:49.002: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local from pod 
dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:49.008: INFO: Lookups using dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local jessie_udp@dns-test-service-2.dns-845.svc.cluster.local jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local] Feb 26 00:00:53.961: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:53.966: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:54.016: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:54.020: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:54.036: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:54.041: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:54.044: INFO: Unable to read jessie_udp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:54.050: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:54.066: INFO: Lookups using dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local jessie_udp@dns-test-service-2.dns-845.svc.cluster.local 
jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local] Feb 26 00:00:58.958: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:58.962: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:58.965: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:58.970: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:58.980: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:58.984: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:58.988: INFO: Unable to read jessie_udp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:58.991: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:00:58.998: INFO: Lookups using dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local jessie_udp@dns-test-service-2.dns-845.svc.cluster.local jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local] Feb 26 00:01:03.984: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:04.000: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:04.008: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local from pod 
dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:04.019: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:04.046: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:04.053: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:04.058: INFO: Unable to read jessie_udp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:04.071: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:04.118: INFO: Lookups using dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local jessie_udp@dns-test-service-2.dns-845.svc.cluster.local jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local] Feb 26 00:01:08.966: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:08.980: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:08.996: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:09.005: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:09.025: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:09.029: INFO: 
Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:09.032: INFO: Unable to read jessie_udp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:09.035: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local from pod dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0: the server could not find the requested resource (get pods dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0) Feb 26 00:01:09.041: INFO: Lookups using dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local wheezy_udp@dns-test-service-2.dns-845.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-845.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-845.svc.cluster.local jessie_udp@dns-test-service-2.dns-845.svc.cluster.local jessie_tcp@dns-test-service-2.dns-845.svc.cluster.local] Feb 26 00:01:14.043: INFO: DNS probes using dns-845/dns-test-586d8b2f-bba4-4b32-9a16-371ad86eabe0 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:01:14.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-845" for this suite. • [SLOW TEST:42.797 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":280,"completed":37,"skipped":523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:01:14.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override command Feb 26 00:01:14.542: INFO: Waiting up to 5m0s for pod "client-containers-804ff116-b298-4b52-9679-923b5e52dc5f" in namespace "containers-4694" to be "success or failure" Feb 26 00:01:14.581: INFO: Pod "client-containers-804ff116-b298-4b52-9679-923b5e52dc5f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 38.767248ms Feb 26 00:01:16.595: INFO: Pod "client-containers-804ff116-b298-4b52-9679-923b5e52dc5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052808563s Feb 26 00:01:18.691: INFO: Pod "client-containers-804ff116-b298-4b52-9679-923b5e52dc5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149276464s Feb 26 00:01:20.698: INFO: Pod "client-containers-804ff116-b298-4b52-9679-923b5e52dc5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156159138s Feb 26 00:01:22.710: INFO: Pod "client-containers-804ff116-b298-4b52-9679-923b5e52dc5f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167952739s Feb 26 00:01:24.717: INFO: Pod "client-containers-804ff116-b298-4b52-9679-923b5e52dc5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.174551456s STEP: Saw pod success Feb 26 00:01:24.717: INFO: Pod "client-containers-804ff116-b298-4b52-9679-923b5e52dc5f" satisfied condition "success or failure" Feb 26 00:01:24.720: INFO: Trying to get logs from node jerma-node pod client-containers-804ff116-b298-4b52-9679-923b5e52dc5f container test-container: STEP: delete the pod Feb 26 00:01:24.810: INFO: Waiting for pod client-containers-804ff116-b298-4b52-9679-923b5e52dc5f to disappear Feb 26 00:01:24.830: INFO: Pod client-containers-804ff116-b298-4b52-9679-923b5e52dc5f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:01:24.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4694" for this suite. • [SLOW TEST:10.565 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":280,"completed":38,"skipped":548,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:01:24.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 26 00:01:39.130: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 26 00:01:39.153: INFO: Pod pod-with-poststart-exec-hook still exists Feb 26 00:01:41.154: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 26 00:01:41.169: INFO: Pod pod-with-poststart-exec-hook still exists Feb 26 00:01:43.154: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 26 00:01:43.160: INFO: Pod pod-with-poststart-exec-hook still exists Feb 26 00:01:45.154: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 26 00:01:45.160: INFO: Pod pod-with-poststart-exec-hook still exists Feb 26 00:01:47.154: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 26 00:01:47.163: INFO: Pod pod-with-poststart-exec-hook still exists Feb 26 00:01:49.154: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 26 00:01:49.160: INFO: Pod pod-with-poststart-exec-hook still exists Feb 26 00:01:51.154: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 26 00:01:51.161: INFO: Pod pod-with-poststart-exec-hook still exists Feb 26 00:01:53.154: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 26 00:01:53.159: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:01:53.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3250" for this suite. 
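
The disappearance polling above is only the cleanup half of the test; the substance is the hook on the deleted pod. A postStart exec hook runs inside the container immediately after it starts, and the handler pod created in BeforeEach is how the suite observes that the hook actually fired. The generic shape, sketched (the hook command here is illustrative; the real test calls back to the handler pod instead):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
    - name: poststart-container      # hypothetical container name
      image: busybox
      command: ["sh", "-c", "sleep 600"]
      lifecycle:
        postStart:
          exec:
            command: ["sh", "-c", "echo started > /tmp/poststart"]   # illustrative

Kubernetes gives no ordering guarantee between the hook and the container's entrypoint; the guarantee is that the container is not marked Running until the hook completes.
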
• [SLOW TEST:28.315 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":280,"completed":39,"skipped":618,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:01:53.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 26 00:01:53.324: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ba40b354-b1a2-4fc3-8fe1-8753ee9ef8e7", Controller:(*bool)(0xc003946a0a), BlockOwnerDeletion:(*bool)(0xc003946a0b)}} Feb 26 00:01:53.401: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7925c6be-bc29-45b8-9b0c-77cba7fb4511", Controller:(*bool)(0xc0038f4f9a), BlockOwnerDeletion:(*bool)(0xc0038f4f9b)}} Feb 26 00:01:53.438: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b2155681-3945-488c-938f-99f502c558c4", Controller:(*bool)(0xc003946bba), BlockOwnerDeletion:(*bool)(0xc003946bbb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:01:58.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6265" for this suite. 
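
The three ObjectMeta dumps above spell out the circle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. Ownership is plain metadata on the dependent object; expressed as a manifest fragment it looks like this (a sketch; the uid must be the owner's live UID, copied here from the log above, and the image is an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: pod2
  ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: pod1
      uid: 7925c6be-bc29-45b8-9b0c-77cba7fb4511   # pod1's UID, per the log above
      controller: true
      blockOwnerDeletion: true
spec:
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9   # assumption: any long-lived image works

The pass condition is that the garbage collector tolerates the cycle, deleting all three pods rather than deadlocking on blockOwnerDeletion.
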
• [SLOW TEST:5.369 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":280,"completed":40,"skipped":622,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:01:58.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating pod Feb 26 00:02:12.911: INFO: Pod pod-hostip-44d94078-d642-47f8-9a51-f3f84d827011 has hostIP: 10.96.2.250 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:02:12.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6949" for this suite. • [SLOW TEST:14.385 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":280,"completed":41,"skipped":631,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:02:12.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 26 00:02:13.485: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 26 00:02:15.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:02:17.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:02:19.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:02:21.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272133, 
loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 26 00:02:24.592: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:02:24.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2823" for this suite. STEP: Destroying namespace "webhook-2823-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.033 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":280,"completed":42,"skipped":681,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:02:24.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-secret-d5wl STEP: Creating a pod to test atomic-volume-subpath Feb 26 00:02:25.088: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-d5wl" in namespace "subpath-1745" to be "success or failure" Feb 26 00:02:25.102: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Pending", Reason="", readiness=false. Elapsed: 13.363775ms Feb 26 00:02:27.109: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020357338s Feb 26 00:02:29.128: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Pending", Reason="", readiness=false. 
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:02:24.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-secret-d5wl
STEP: Creating a pod to test atomic-volume-subpath
Feb 26 00:02:25.088: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-d5wl" in namespace "subpath-1745" to be "success or failure"
Feb 26 00:02:25.102: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Pending", Reason="", readiness=false. Elapsed: 13.363775ms
Feb 26 00:02:27.109: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020357338s
Feb 26 00:02:29.128: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039585302s
Feb 26 00:02:31.137: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048680316s
Feb 26 00:02:33.176: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087514211s
Feb 26 00:02:35.192: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Running", Reason="", readiness=true. Elapsed: 10.103204421s
Feb 26 00:02:37.202: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Running", Reason="", readiness=true. Elapsed: 12.113624134s
Feb 26 00:02:39.209: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Running", Reason="", readiness=true. Elapsed: 14.120179232s
Feb 26 00:02:41.218: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Running", Reason="", readiness=true. Elapsed: 16.129051518s
Feb 26 00:02:43.225: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Running", Reason="", readiness=true. Elapsed: 18.135955206s
Feb 26 00:02:45.243: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Running", Reason="", readiness=true. Elapsed: 20.154232145s
Feb 26 00:02:47.310: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Running", Reason="", readiness=true. Elapsed: 22.221504432s
Feb 26 00:02:49.317: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Running", Reason="", readiness=true. Elapsed: 24.228880828s
Feb 26 00:02:51.424: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Running", Reason="", readiness=true. Elapsed: 26.335667777s
Feb 26 00:02:53.579: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Running", Reason="", readiness=true. Elapsed: 28.489950722s
Feb 26 00:02:55.585: INFO: Pod "pod-subpath-test-secret-d5wl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.496628889s
STEP: Saw pod success
Feb 26 00:02:55.585: INFO: Pod "pod-subpath-test-secret-d5wl" satisfied condition "success or failure"
Feb 26 00:02:55.590: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-d5wl container test-container-subpath-secret-d5wl:
STEP: delete the pod
Feb 26 00:02:55.647: INFO: Waiting for pod pod-subpath-test-secret-d5wl to disappear
Feb 26 00:02:55.654: INFO: Pod pod-subpath-test-secret-d5wl no longer exists
STEP: Deleting pod pod-subpath-test-secret-d5wl
Feb 26 00:02:55.654: INFO: Deleting pod "pod-subpath-test-secret-d5wl" in namespace "subpath-1745"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:02:55.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1745" for this suite.
• [SLOW TEST:30.702 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":280,"completed":43,"skipped":714,"failed":0}
SSSSS
------------------------------
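The subpath test above mounts a secret-backed volume and exposes a single entry of it inside the container via subPath. A rough standalone sketch of that shape using the Kubernetes API types; the names, image, and key are placeholders, not the test's generated values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod that mounts a secret volume and surfaces only the entry
	// "secret-key" of that volume at a single path in the container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox",
				Command: []string{"cat", "/mnt/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-vol",
					MountPath: "/mnt/data",
					SubPath:   "secret-key", // mount a single entry of the volume
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}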
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:02:55.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 26 00:02:55.817: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8871 /api/v1/namespaces/watch-8871/configmaps/e2e-watch-test-label-changed 8eebe722-1d03-4481-9797-3f7572510baf 10758568 0 2020-02-26 00:02:55 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 26 00:02:55.818: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8871 /api/v1/namespaces/watch-8871/configmaps/e2e-watch-test-label-changed 8eebe722-1d03-4481-9797-3f7572510baf 10758569 0 2020-02-26 00:02:55 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 26 00:02:55.819: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8871 /api/v1/namespaces/watch-8871/configmaps/e2e-watch-test-label-changed 8eebe722-1d03-4481-9797-3f7572510baf 10758570 0 2020-02-26 00:02:55 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 26 00:03:05.925: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8871 /api/v1/namespaces/watch-8871/configmaps/e2e-watch-test-label-changed 8eebe722-1d03-4481-9797-3f7572510baf 10758603 0 2020-02-26 00:02:55 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 26 00:03:05.925: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8871 /api/v1/namespaces/watch-8871/configmaps/e2e-watch-test-label-changed 8eebe722-1d03-4481-9797-3f7572510baf 10758604 0 2020-02-26 00:02:55 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 26 00:03:05.925: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8871 /api/v1/namespaces/watch-8871/configmaps/e2e-watch-test-label-changed 8eebe722-1d03-4481-9797-3f7572510baf 10758605 0 2020-02-26 00:02:55 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:03:05.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8871" for this suite.
• [SLOW TEST:10.273 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":280,"completed":44,"skipped":719,"failed":0}
SSSSSSSSSSSS
------------------------------
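The watch semantics exercised above (an object leaving a label selector surfaces as DELETED, and re-entering it surfaces as ADDED) can be observed with a short client-go sketch. Assumptions, not the suite's code: a recent client-go, the run's kubeconfig path, and the default namespace in place of the generated watch-8871 namespace.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Watch only configmaps carrying the label the test flips back and forth.
	w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Relabeling an object out of the selector delivers DELETED; restoring
	// the label delivers ADDED, matching the "Got :" lines in the log.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}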
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:03:05.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:03:06.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4133" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":280,"completed":45,"skipped":731,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
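The "secure master service" checked above is the built-in kubernetes service in the default namespace, which is expected to expose a secure https/443 port. A minimal client-go sketch that fetches it and lists its ports, assuming a recent client-go and the run's kubeconfig:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Every cluster exposes the API server through this fixed service.
	svc, err := client.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		fmt.Printf("port %q: %d/%s\n", p.Name, p.Port, p.Protocol)
	}
}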
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:03:06.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 00:03:06.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Feb 26 00:03:09.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6306 create -f -'
Feb 26 00:03:12.310: INFO: stderr: ""
Feb 26 00:03:12.310: INFO: stdout: "e2e-test-crd-publish-openapi-6824-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 26 00:03:12.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6306 delete e2e-test-crd-publish-openapi-6824-crds test-foo'
Feb 26 00:03:12.489: INFO: stderr: ""
Feb 26 00:03:12.489: INFO: stdout: "e2e-test-crd-publish-openapi-6824-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Feb 26 00:03:12.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6306 apply -f -'
Feb 26 00:03:12.772: INFO: stderr: ""
Feb 26 00:03:12.773: INFO: stdout: "e2e-test-crd-publish-openapi-6824-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 26 00:03:12.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6306 delete e2e-test-crd-publish-openapi-6824-crds test-foo'
Feb 26 00:03:12.872: INFO: stderr: ""
Feb 26 00:03:12.872: INFO: stdout: "e2e-test-crd-publish-openapi-6824-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Feb 26 00:03:12.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6306 create -f -'
Feb 26 00:03:13.278: INFO: rc: 1
Feb 26 00:03:13.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6306 apply -f -'
Feb 26 00:03:13.645: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Feb 26 00:03:13.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6306 create -f -'
Feb 26 00:03:14.105: INFO: rc: 1
Feb 26 00:03:14.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6306 apply -f -'
Feb 26 00:03:14.642: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Feb 26 00:03:14.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6824-crds'
Feb 26 00:03:14.870: INFO: stderr: ""
Feb 26 00:03:14.870: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6824-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Feb 26 00:03:14.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6824-crds.metadata'
Feb 26 00:03:15.327: INFO: stderr: ""
Feb 26 00:03:15.327: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6824-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client.
The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Feb 26 00:03:15.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6824-crds.spec'
Feb 26 00:03:15.698: INFO: stderr: ""
Feb 26 00:03:15.699: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6824-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Feb 26 00:03:15.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6824-crds.spec.bars'
Feb 26 00:03:16.101: INFO: stderr: ""
Feb 26 00:03:16.102: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6824-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Feb 26 00:03:16.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6824-crds.spec.bars2'
Feb 26 00:03:16.561: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:03:20.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6306" for this suite.
• [SLOW TEST:13.950 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":280,"completed":46,"skipped":757,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
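The rejections (rc: 1) above are client-side: kubectl validates against the published OpenAPI schema before the request leaves the machine. A sketch of a validation schema in the spirit of the test's Foo CRD, built from the apiextensions v1 types; field names such as spec.bars mirror the explain output above, the rest is illustrative only:

package main

import (
	"encoding/json"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	// spec.bars is a list of objects whose "name" property is required,
	// which is what makes kubectl reject requests that omit it or that
	// carry unknown properties.
	schema := &apiextv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextv1.JSONSchemaProps{
			"spec": {
				Type: "object",
				Properties: map[string]apiextv1.JSONSchemaProps{
					"bars": {
						Type: "array",
						Items: &apiextv1.JSONSchemaPropsOrArray{
							Schema: &apiextv1.JSONSchemaProps{
								Type:     "object",
								Required: []string{"name"},
								Properties: map[string]apiextv1.JSONSchemaProps{
									"name": {Type: "string"},
									"age":  {Type: "string"},
								},
							},
						},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(schema, "", "  ")
	fmt.Println(string(out))
}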
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:03:20.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-3853/configmap-test-1656bfb9-f9dd-45a8-b60a-f84dafda92f6
STEP: Creating a pod to test consume configMaps
Feb 26 00:03:20.206: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b7d97d0-8aed-4a84-aba1-ca5632fa44d9" in namespace "configmap-3853" to be "success or failure"
Feb 26 00:03:20.256: INFO: Pod "pod-configmaps-9b7d97d0-8aed-4a84-aba1-ca5632fa44d9": Phase="Pending", Reason="", readiness=false. Elapsed: 50.0662ms
Feb 26 00:03:22.264: INFO: Pod "pod-configmaps-9b7d97d0-8aed-4a84-aba1-ca5632fa44d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057234463s
Feb 26 00:03:25.012: INFO: Pod "pod-configmaps-9b7d97d0-8aed-4a84-aba1-ca5632fa44d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.805423977s
Feb 26 00:03:27.019: INFO: Pod "pod-configmaps-9b7d97d0-8aed-4a84-aba1-ca5632fa44d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.81294506s
Feb 26 00:03:29.048: INFO: Pod "pod-configmaps-9b7d97d0-8aed-4a84-aba1-ca5632fa44d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.841405485s
Feb 26 00:03:31.056: INFO: Pod "pod-configmaps-9b7d97d0-8aed-4a84-aba1-ca5632fa44d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.850076987s
STEP: Saw pod success
Feb 26 00:03:31.057: INFO: Pod "pod-configmaps-9b7d97d0-8aed-4a84-aba1-ca5632fa44d9" satisfied condition "success or failure"
Feb 26 00:03:31.063: INFO: Trying to get logs from node jerma-node pod pod-configmaps-9b7d97d0-8aed-4a84-aba1-ca5632fa44d9 container env-test:
STEP: delete the pod
Feb 26 00:03:31.222: INFO: Waiting for pod pod-configmaps-9b7d97d0-8aed-4a84-aba1-ca5632fa44d9 to disappear
Feb 26 00:03:31.231: INFO: Pod pod-configmaps-9b7d97d0-8aed-4a84-aba1-ca5632fa44d9 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:03:31.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3853" for this suite.
• [SLOW TEST:11.269 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":280,"completed":47,"skipped":773,"failed":0}
S
------------------------------
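The pod the test builds maps one configmap key into an environment variable. A standalone sketch of that spec with the core API types; the configmap name, key, and variable name are placeholders rather than the test's generated values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A container that receives a single configmap key as $DATA_1.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $DATA_1"},
				Env: []corev1.EnvVar{{
					Name: "DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}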
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:03:31.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating secret secrets-2270/secret-test-c26a7301-4197-478f-9827-bd7e2105ea94
STEP: Creating a pod to test consume secrets
Feb 26 00:03:31.584: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb23e766-3ebb-4bd2-b668-9b5e52cf5060" in namespace "secrets-2270" to be "success or failure"
Feb 26 00:03:31.593: INFO: Pod "pod-configmaps-eb23e766-3ebb-4bd2-b668-9b5e52cf5060": Phase="Pending", Reason="", readiness=false. Elapsed: 7.653659ms
Feb 26 00:03:33.602: INFO: Pod "pod-configmaps-eb23e766-3ebb-4bd2-b668-9b5e52cf5060": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017185553s
Feb 26 00:03:35.613: INFO: Pod "pod-configmaps-eb23e766-3ebb-4bd2-b668-9b5e52cf5060": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027685753s
Feb 26 00:03:37.621: INFO: Pod "pod-configmaps-eb23e766-3ebb-4bd2-b668-9b5e52cf5060": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036365331s
Feb 26 00:03:39.628: INFO: Pod "pod-configmaps-eb23e766-3ebb-4bd2-b668-9b5e52cf5060": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042862257s
Feb 26 00:03:41.648: INFO: Pod "pod-configmaps-eb23e766-3ebb-4bd2-b668-9b5e52cf5060": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063089222s
STEP: Saw pod success
Feb 26 00:03:41.648: INFO: Pod "pod-configmaps-eb23e766-3ebb-4bd2-b668-9b5e52cf5060" satisfied condition "success or failure"
Feb 26 00:03:41.673: INFO: Trying to get logs from node jerma-node pod pod-configmaps-eb23e766-3ebb-4bd2-b668-9b5e52cf5060 container env-test:
STEP: delete the pod
Feb 26 00:03:41.834: INFO: Waiting for pod pod-configmaps-eb23e766-3ebb-4bd2-b668-9b5e52cf5060 to disappear
Feb 26 00:03:41.843: INFO: Pod pod-configmaps-eb23e766-3ebb-4bd2-b668-9b5e52cf5060 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:03:41.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2270" for this suite.
• [SLOW TEST:10.565 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":48,"skipped":774,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
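Secrets can be consumed via the environment either key by key (as with the configmap sketch above) or wholesale with EnvFrom. A hedged sketch of the wholesale form; the secret name is a placeholder, not the test's generated one:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// EnvFrom injects every key of the referenced secret as an
	// environment variable in the container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"env"},
				EnvFrom: []corev1.EnvFromSource{{
					SecretRef: &corev1.SecretEnvSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}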
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:03:41.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-e8769ae9-1bad-43ca-afe3-773517933b98
STEP: Creating a pod to test consume secrets
Feb 26 00:03:41.998: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b270a402-b6e0-4af8-8ec4-2e1ef85dc95e" in namespace "projected-3454" to be "success or failure"
Feb 26 00:03:42.026: INFO: Pod "pod-projected-secrets-b270a402-b6e0-4af8-8ec4-2e1ef85dc95e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.561413ms
Feb 26 00:03:44.038: INFO: Pod "pod-projected-secrets-b270a402-b6e0-4af8-8ec4-2e1ef85dc95e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039796559s
Feb 26 00:03:46.045: INFO: Pod "pod-projected-secrets-b270a402-b6e0-4af8-8ec4-2e1ef85dc95e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047570238s
Feb 26 00:03:48.052: INFO: Pod "pod-projected-secrets-b270a402-b6e0-4af8-8ec4-2e1ef85dc95e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054109586s
Feb 26 00:03:50.058: INFO: Pod "pod-projected-secrets-b270a402-b6e0-4af8-8ec4-2e1ef85dc95e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06013242s
STEP: Saw pod success
Feb 26 00:03:50.058: INFO: Pod "pod-projected-secrets-b270a402-b6e0-4af8-8ec4-2e1ef85dc95e" satisfied condition "success or failure"
Feb 26 00:03:50.062: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-b270a402-b6e0-4af8-8ec4-2e1ef85dc95e container projected-secret-volume-test:
STEP: delete the pod
Feb 26 00:03:50.117: INFO: Waiting for pod pod-projected-secrets-b270a402-b6e0-4af8-8ec4-2e1ef85dc95e to disappear
Feb 26 00:03:50.120: INFO: Pod pod-projected-secrets-b270a402-b6e0-4af8-8ec4-2e1ef85dc95e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:03:50.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3454" for this suite.
• [SLOW TEST:8.253 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":49,"skipped":834,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
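A projected volume with defaultMode set, as this test exercises, applies one file mode to everything it projects. A sketch of that volume shape with the core API types; names, image, and the 0400 mode are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // applied to every projected file unless overridden per item
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/projected"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret", MountPath: "/etc/projected"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}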
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:03:50.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-9ca2df97-2a7e-471d-b8a4-9fdc7b7bf075
STEP: Creating a pod to test consume secrets
Feb 26 00:03:50.281: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3866475b-3b1f-4c23-8304-bd5e6de607c6" in namespace "projected-7529" to be "success or failure"
Feb 26 00:03:50.308: INFO: Pod "pod-projected-secrets-3866475b-3b1f-4c23-8304-bd5e6de607c6": Phase="Pending", Reason="", readiness=false. Elapsed: 27.346482ms
Feb 26 00:03:52.315: INFO: Pod "pod-projected-secrets-3866475b-3b1f-4c23-8304-bd5e6de607c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034036544s
Feb 26 00:03:54.322: INFO: Pod "pod-projected-secrets-3866475b-3b1f-4c23-8304-bd5e6de607c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041395343s
Feb 26 00:03:56.515: INFO: Pod "pod-projected-secrets-3866475b-3b1f-4c23-8304-bd5e6de607c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233629742s
Feb 26 00:03:58.526: INFO: Pod "pod-projected-secrets-3866475b-3b1f-4c23-8304-bd5e6de607c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.245410801s
STEP: Saw pod success
Feb 26 00:03:58.527: INFO: Pod "pod-projected-secrets-3866475b-3b1f-4c23-8304-bd5e6de607c6" satisfied condition "success or failure"
Feb 26 00:03:58.532: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-3866475b-3b1f-4c23-8304-bd5e6de607c6 container projected-secret-volume-test:
STEP: delete the pod
Feb 26 00:03:58.642: INFO: Waiting for pod pod-projected-secrets-3866475b-3b1f-4c23-8304-bd5e6de607c6 to disappear
Feb 26 00:03:58.721: INFO: Pod pod-projected-secrets-3866475b-3b1f-4c23-8304-bd5e6de607c6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:03:58.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7529" for this suite.
• [SLOW TEST:8.611 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":50,"skipped":858,"failed":0}
SSSSSSSSSSSSS
------------------------------
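Where the previous test set one defaultMode for the whole volume, "mappings and Item Mode" remaps an individual secret key to a chosen filename and gives that single file its own mode. A minimal sketch of just that projection; key, path, and mode values are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	// One key of the secret is projected under a new relative path with a
	// per-item mode; this stanza drops into the Sources list of the
	// previous sketch's ProjectedVolumeSource.
	proj := corev1.VolumeProjection{
		Secret: &corev1.SecretProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
			Items: []corev1.KeyToPath{{
				Key:  "data-1",
				Path: "new-path-data-1",
				Mode: &mode,
			}},
		},
	}
	out, _ := json.MarshalIndent(proj, "", "  ")
	fmt.Println(string(out))
}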
\"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272240, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272240, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Feb 26 00:04:02.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272240, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272240, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272241, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272240, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:04:04.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272240, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272240, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272241, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272240, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:04:06.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272240, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272240, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272241, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272240, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 26 00:04:10.052: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should 
include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:04:10.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5822" for this suite. STEP: Destroying namespace "webhook-5822-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:11.461 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":280,"completed":51,"skipped":871,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:04:10.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 26 00:04:11.337: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 26 00:04:13.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, 
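The discovery walk above can be reproduced with the discovery client rather than raw /apis requests. A hedged client-go sketch, assuming a recent client-go and the run's kubeconfig, that fetches the admissionregistration.k8s.io/v1 document and prints the resources it advertises:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// The test asserts that mutatingwebhookconfigurations and
	// validatingwebhookconfigurations both appear in this list.
	list, err := client.Discovery().ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range list.APIResources {
		fmt.Println(r.Name)
	}
}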
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:04:10.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 26 00:04:11.337: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 26 00:04:13.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 00:04:15.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 00:04:17.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 00:04:19.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 00:04:21.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272251, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 26 00:04:24.415: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:04:25.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9272" for this suite.
STEP: Destroying namespace "webhook-9272-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:15.144 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":280,"completed":52,"skipped":889,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
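The list-then-delete-collection flow above has a direct client-go equivalent. A hedged sketch, assuming a recent client-go and a hypothetical label that the created webhook configurations share (the suite uses its own generated labels):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	hooks := client.AdmissionregistrationV1().ValidatingWebhookConfigurations()

	// List the matching webhook configurations, then remove them in one
	// call; afterwards, configmaps that violated their rules are admitted.
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test-webhooks=true"} // hypothetical label
	list, err := hooks.List(context.TODO(), sel)
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d validating webhook configurations\n", len(list.Items))

	if err := hooks.DeleteCollection(context.TODO(), metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
}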
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:04:25.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-34f7b0db-8c38-4bbd-b5e4-397c73e65a07
STEP: Creating a pod to test consume configMaps
Feb 26 00:04:25.527: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dd571cca-ccd2-493b-8ea7-425108fb3dc3" in namespace "projected-5804" to be "success or failure"
Feb 26 00:04:25.532: INFO: Pod "pod-projected-configmaps-dd571cca-ccd2-493b-8ea7-425108fb3dc3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.209314ms
Feb 26 00:04:27.538: INFO: Pod "pod-projected-configmaps-dd571cca-ccd2-493b-8ea7-425108fb3dc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011241553s
Feb 26 00:04:29.544: INFO: Pod "pod-projected-configmaps-dd571cca-ccd2-493b-8ea7-425108fb3dc3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017534197s
Feb 26 00:04:31.553: INFO: Pod "pod-projected-configmaps-dd571cca-ccd2-493b-8ea7-425108fb3dc3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025918586s
Feb 26 00:04:33.561: INFO: Pod "pod-projected-configmaps-dd571cca-ccd2-493b-8ea7-425108fb3dc3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033901419s
Feb 26 00:04:35.569: INFO: Pod "pod-projected-configmaps-dd571cca-ccd2-493b-8ea7-425108fb3dc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.041921116s
STEP: Saw pod success
Feb 26 00:04:35.569: INFO: Pod "pod-projected-configmaps-dd571cca-ccd2-493b-8ea7-425108fb3dc3" satisfied condition "success or failure"
Feb 26 00:04:35.575: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-dd571cca-ccd2-493b-8ea7-425108fb3dc3 container projected-configmap-volume-test:
STEP: delete the pod
Feb 26 00:04:35.839: INFO: Waiting for pod pod-projected-configmaps-dd571cca-ccd2-493b-8ea7-425108fb3dc3 to disappear
Feb 26 00:04:35.938: INFO: Pod pod-projected-configmaps-dd571cca-ccd2-493b-8ea7-425108fb3dc3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:04:35.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5804" for this suite.
• [SLOW TEST:10.603 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":53,"skipped":921,"failed":0}
SS
------------------------------
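"Multiple volumes in the same pod" means the same configmap is projected twice, under two volume names and two mount points. A hedged sketch of that shape; names, image, and mount paths are placeholders:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One projected configmap source reused for two volumes.
	src := corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				ConfigMap: &corev1.ConfigMapProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test"},
				},
			}},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "cm-vol-1", VolumeSource: src},
				{Name: "cm-vol-2", VolumeSource: src},
			},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cm-1/* /etc/cm-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cm-vol-1", MountPath: "/etc/cm-1"},
					{Name: "cm-vol-2", MountPath: "/etc/cm-2"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}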
• [SLOW TEST:8.185 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":280,"completed":54,"skipped":923,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:04:44.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7018 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating statefulset ss in namespace statefulset-7018 Feb 26 00:04:44.312: INFO: Found 0 stateful pods, waiting for 1 Feb 26 00:04:54.321: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Feb 26 00:05:04.323: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 26 00:05:04.365: INFO: Deleting all statefulset in ns statefulset-7018 Feb 26 00:05:04.413: INFO: Scaling statefulset ss to 0 Feb 26 00:05:24.616: INFO: Waiting for statefulset status.replicas updated to 0 Feb 26 00:05:24.625: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:05:24.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7018" for this suite. 
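[Annotation] The "getting scale subresource" / "updating a scale subresource" steps above go through the StatefulSet's /scale endpoint rather than patching the StatefulSet object itself. A sketch of the same round-trip with client-go's GetScale/UpdateScale; namespace and name are taken from the log, and the context-taking signatures again assume a current client-go.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	// Read the scale subresource: an autoscaling/v1 Scale, not the StatefulSet.
	scale, err := cs.AppsV1().StatefulSets("statefulset-7018").GetScale(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Write replicas back through /scale; the controller reconciles and the
	// StatefulSet's spec.replicas follows, which is what the test verifies.
	scale.Spec.Replicas = 2
	if _, err := cs.AppsV1().StatefulSets("statefulset-7018").UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scaled ss to", scale.Spec.Replicas)
}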
• [SLOW TEST:40.567 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":280,"completed":55,"skipped":959,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:05:24.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:05:38.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1138" for this suite. • [SLOW TEST:13.402 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":280,"completed":56,"skipped":995,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:05:38.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 26 00:05:38.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cf3656b-7873-47c0-87a7-313cf887f64e" in namespace "projected-8014" to be "success or failure" Feb 26 00:05:38.240: INFO: Pod "downwardapi-volume-9cf3656b-7873-47c0-87a7-313cf887f64e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.783193ms Feb 26 00:05:40.247: INFO: Pod "downwardapi-volume-9cf3656b-7873-47c0-87a7-313cf887f64e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023724097s Feb 26 00:05:42.251: INFO: Pod "downwardapi-volume-9cf3656b-7873-47c0-87a7-313cf887f64e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027515979s Feb 26 00:05:44.258: INFO: Pod "downwardapi-volume-9cf3656b-7873-47c0-87a7-313cf887f64e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034038296s Feb 26 00:05:46.263: INFO: Pod "downwardapi-volume-9cf3656b-7873-47c0-87a7-313cf887f64e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039520102s STEP: Saw pod success Feb 26 00:05:46.263: INFO: Pod "downwardapi-volume-9cf3656b-7873-47c0-87a7-313cf887f64e" satisfied condition "success or failure" Feb 26 00:05:46.267: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9cf3656b-7873-47c0-87a7-313cf887f64e container client-container: STEP: delete the pod Feb 26 00:05:46.315: INFO: Waiting for pod downwardapi-volume-9cf3656b-7873-47c0-87a7-313cf887f64e to disappear Feb 26 00:05:46.321: INFO: Pod downwardapi-volume-9cf3656b-7873-47c0-87a7-313cf887f64e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:05:46.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8014" for this suite. 
• [SLOW TEST:8.213 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":57,"skipped":1016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:05:46.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-9bbf17f8-70b0-413b-93dc-ca453a768eaf in namespace container-probe-2576 Feb 26 00:05:54.535: INFO: Started pod liveness-9bbf17f8-70b0-413b-93dc-ca453a768eaf in namespace container-probe-2576 STEP: checking the pod's current state and verifying that restartCount is present Feb 26 00:05:54.537: INFO: Initial restart count of pod liveness-9bbf17f8-70b0-413b-93dc-ca453a768eaf is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:09:55.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2576" for this suite. 
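[Annotation] The probe above is a plain TCP dial against port 8080: as long as something keeps accepting connections, restartCount stays at 0 for the roughly four minutes the test watches it. A sketch of such a pod follows; the agnhost image and args are an assumption (any long-lived listener on 8080 would do), and the Probe field spelled Handler here matches the 1.17-era API of this run but was renamed ProbeHandler in later client-go releases.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name: "liveness",
				// Assumed image/args: an HTTP server listening on 8080.
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"netexec", "--http-port=8080"},
				LivenessProbe: &corev1.Probe{
					// Handler in the 1.17-era API; ProbeHandler in newer releases.
					Handler: corev1.Handler{
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}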
• [SLOW TEST:249.474 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":280,"completed":58,"skipped":1047,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:09:55.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 26 00:09:55.978: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7464 I0226 00:09:56.002425 9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7464, replica count: 1 I0226 00:09:57.054454 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:09:58.055034 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:09:59.055565 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:10:00.056405 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:10:01.057089 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:10:02.057823 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:10:03.058376 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:10:04.059079 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:10:05.059482 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:10:06.059894 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 26 00:10:06.196: INFO: Created: latency-svc-45swl Feb 26 00:10:06.210: INFO: Got endpoints: latency-svc-45swl [50.039437ms] Feb 26 00:10:06.253: INFO: Created: latency-svc-fmclj Feb 26 00:10:06.315: INFO: Got 
endpoints: latency-svc-fmclj [104.498187ms] Feb 26 00:10:06.340: INFO: Created: latency-svc-w8rn4 Feb 26 00:10:06.372: INFO: Got endpoints: latency-svc-w8rn4 [162.191188ms] Feb 26 00:10:06.400: INFO: Created: latency-svc-v8ht5 Feb 26 00:10:06.478: INFO: Got endpoints: latency-svc-v8ht5 [266.039563ms] Feb 26 00:10:06.500: INFO: Created: latency-svc-6p7f4 Feb 26 00:10:06.518: INFO: Got endpoints: latency-svc-6p7f4 [308.147832ms] Feb 26 00:10:06.540: INFO: Created: latency-svc-gkx6b Feb 26 00:10:06.541: INFO: Got endpoints: latency-svc-gkx6b [329.582562ms] Feb 26 00:10:06.651: INFO: Created: latency-svc-zlrh5 Feb 26 00:10:06.691: INFO: Got endpoints: latency-svc-zlrh5 [479.238065ms] Feb 26 00:10:06.695: INFO: Created: latency-svc-mcnvg Feb 26 00:10:06.707: INFO: Got endpoints: latency-svc-mcnvg [496.669417ms] Feb 26 00:10:06.845: INFO: Created: latency-svc-rkqjc Feb 26 00:10:06.866: INFO: Got endpoints: latency-svc-rkqjc [653.648482ms] Feb 26 00:10:06.879: INFO: Created: latency-svc-qtd7r Feb 26 00:10:06.891: INFO: Got endpoints: latency-svc-qtd7r [679.20979ms] Feb 26 00:10:06.939: INFO: Created: latency-svc-zckdg Feb 26 00:10:06.991: INFO: Got endpoints: latency-svc-zckdg [780.470201ms] Feb 26 00:10:07.003: INFO: Created: latency-svc-gwjsv Feb 26 00:10:07.006: INFO: Got endpoints: latency-svc-gwjsv [793.121034ms] Feb 26 00:10:07.067: INFO: Created: latency-svc-brhbp Feb 26 00:10:07.068: INFO: Got endpoints: latency-svc-brhbp [856.936512ms] Feb 26 00:10:07.179: INFO: Created: latency-svc-mls97 Feb 26 00:10:07.194: INFO: Got endpoints: latency-svc-mls97 [982.027324ms] Feb 26 00:10:07.227: INFO: Created: latency-svc-69dgx Feb 26 00:10:07.236: INFO: Got endpoints: latency-svc-69dgx [1.025561272s] Feb 26 00:10:07.260: INFO: Created: latency-svc-4tgwv Feb 26 00:10:07.275: INFO: Got endpoints: latency-svc-4tgwv [1.063359691s] Feb 26 00:10:07.342: INFO: Created: latency-svc-z5grn Feb 26 00:10:07.350: INFO: Got endpoints: latency-svc-z5grn [1.035114593s] Feb 26 00:10:07.377: INFO: Created: latency-svc-r6vrc Feb 26 00:10:07.385: INFO: Got endpoints: latency-svc-r6vrc [1.012753818s] Feb 26 00:10:07.411: INFO: Created: latency-svc-4w8mm Feb 26 00:10:07.423: INFO: Got endpoints: latency-svc-4w8mm [945.429435ms] Feb 26 00:10:07.536: INFO: Created: latency-svc-ntx4q Feb 26 00:10:07.570: INFO: Got endpoints: latency-svc-ntx4q [1.050878572s] Feb 26 00:10:07.571: INFO: Created: latency-svc-v96kg Feb 26 00:10:07.587: INFO: Got endpoints: latency-svc-v96kg [1.045818182s] Feb 26 00:10:07.593: INFO: Created: latency-svc-mtpkm Feb 26 00:10:07.618: INFO: Created: latency-svc-hgvm2 Feb 26 00:10:07.620: INFO: Got endpoints: latency-svc-mtpkm [927.757146ms] Feb 26 00:10:07.674: INFO: Got endpoints: latency-svc-hgvm2 [966.128699ms] Feb 26 00:10:07.727: INFO: Created: latency-svc-q47ww Feb 26 00:10:07.744: INFO: Got endpoints: latency-svc-q47ww [877.75298ms] Feb 26 00:10:07.907: INFO: Created: latency-svc-h42xn Feb 26 00:10:07.964: INFO: Created: latency-svc-sqqlx Feb 26 00:10:07.964: INFO: Got endpoints: latency-svc-h42xn [1.072759232s] Feb 26 00:10:07.975: INFO: Got endpoints: latency-svc-sqqlx [983.889479ms] Feb 26 00:10:08.064: INFO: Created: latency-svc-hmcxc Feb 26 00:10:08.072: INFO: Got endpoints: latency-svc-hmcxc [1.066162887s] Feb 26 00:10:08.090: INFO: Created: latency-svc-4hv2g Feb 26 00:10:08.096: INFO: Got endpoints: latency-svc-4hv2g [1.027590478s] Feb 26 00:10:08.127: INFO: Created: latency-svc-2xwvv Feb 26 00:10:08.225: INFO: Got endpoints: latency-svc-2xwvv [1.031074471s] Feb 26 00:10:08.241: INFO: 
Created: latency-svc-lpcn9 Feb 26 00:10:08.262: INFO: Got endpoints: latency-svc-lpcn9 [1.026480366s] Feb 26 00:10:08.264: INFO: Created: latency-svc-mmtpd Feb 26 00:10:08.269: INFO: Got endpoints: latency-svc-mmtpd [993.341553ms] Feb 26 00:10:08.293: INFO: Created: latency-svc-fhc8l Feb 26 00:10:08.314: INFO: Got endpoints: latency-svc-fhc8l [962.98564ms] Feb 26 00:10:08.377: INFO: Created: latency-svc-sdwmb Feb 26 00:10:08.383: INFO: Got endpoints: latency-svc-sdwmb [997.988477ms] Feb 26 00:10:08.409: INFO: Created: latency-svc-5jw2v Feb 26 00:10:08.423: INFO: Got endpoints: latency-svc-5jw2v [999.36652ms] Feb 26 00:10:08.443: INFO: Created: latency-svc-29xvw Feb 26 00:10:08.455: INFO: Got endpoints: latency-svc-29xvw [885.217932ms] Feb 26 00:10:08.558: INFO: Created: latency-svc-rb5qj Feb 26 00:10:08.589: INFO: Created: latency-svc-4k9dm Feb 26 00:10:08.593: INFO: Got endpoints: latency-svc-rb5qj [1.005733946s] Feb 26 00:10:08.596: INFO: Got endpoints: latency-svc-4k9dm [976.30257ms] Feb 26 00:10:08.616: INFO: Created: latency-svc-nc68d Feb 26 00:10:08.630: INFO: Got endpoints: latency-svc-nc68d [955.655432ms] Feb 26 00:10:08.642: INFO: Created: latency-svc-glzvg Feb 26 00:10:08.650: INFO: Got endpoints: latency-svc-glzvg [905.502592ms] Feb 26 00:10:08.721: INFO: Created: latency-svc-dw5jz Feb 26 00:10:08.730: INFO: Got endpoints: latency-svc-dw5jz [765.705536ms] Feb 26 00:10:08.752: INFO: Created: latency-svc-7fbg7 Feb 26 00:10:08.791: INFO: Got endpoints: latency-svc-7fbg7 [816.177255ms] Feb 26 00:10:08.862: INFO: Created: latency-svc-jkffp Feb 26 00:10:08.924: INFO: Got endpoints: latency-svc-jkffp [851.753555ms] Feb 26 00:10:08.925: INFO: Created: latency-svc-jnq5l Feb 26 00:10:08.931: INFO: Got endpoints: latency-svc-jnq5l [834.775224ms] Feb 26 00:10:09.041: INFO: Created: latency-svc-7hbbh Feb 26 00:10:09.069: INFO: Got endpoints: latency-svc-7hbbh [844.052499ms] Feb 26 00:10:09.079: INFO: Created: latency-svc-td96k Feb 26 00:10:09.079: INFO: Got endpoints: latency-svc-td96k [816.842728ms] Feb 26 00:10:09.108: INFO: Created: latency-svc-dlhhq Feb 26 00:10:09.113: INFO: Got endpoints: latency-svc-dlhhq [844.373123ms] Feb 26 00:10:09.138: INFO: Created: latency-svc-ktrp4 Feb 26 00:10:09.178: INFO: Got endpoints: latency-svc-ktrp4 [864.105659ms] Feb 26 00:10:09.209: INFO: Created: latency-svc-ld4qv Feb 26 00:10:09.215: INFO: Got endpoints: latency-svc-ld4qv [831.651086ms] Feb 26 00:10:09.245: INFO: Created: latency-svc-lckpd Feb 26 00:10:09.251: INFO: Got endpoints: latency-svc-lckpd [827.598555ms] Feb 26 00:10:09.280: INFO: Created: latency-svc-q25wq Feb 26 00:10:09.369: INFO: Got endpoints: latency-svc-q25wq [913.53048ms] Feb 26 00:10:09.383: INFO: Created: latency-svc-j7fp9 Feb 26 00:10:09.387: INFO: Got endpoints: latency-svc-j7fp9 [793.649954ms] Feb 26 00:10:09.414: INFO: Created: latency-svc-64jsw Feb 26 00:10:09.426: INFO: Got endpoints: latency-svc-64jsw [829.388142ms] Feb 26 00:10:09.573: INFO: Created: latency-svc-z575c Feb 26 00:10:09.598: INFO: Got endpoints: latency-svc-z575c [967.939661ms] Feb 26 00:10:09.628: INFO: Created: latency-svc-ps5sl Feb 26 00:10:09.664: INFO: Got endpoints: latency-svc-ps5sl [1.014215455s] Feb 26 00:10:09.667: INFO: Created: latency-svc-5xfts Feb 26 00:10:09.671: INFO: Got endpoints: latency-svc-5xfts [940.769505ms] Feb 26 00:10:09.772: INFO: Created: latency-svc-5rt5c Feb 26 00:10:09.775: INFO: Got endpoints: latency-svc-5rt5c [983.271708ms] Feb 26 00:10:09.802: INFO: Created: latency-svc-jklh7 Feb 26 00:10:09.816: INFO: Got endpoints: 
latency-svc-jklh7 [891.442175ms] Feb 26 00:10:09.853: INFO: Created: latency-svc-9gffd Feb 26 00:10:09.982: INFO: Got endpoints: latency-svc-9gffd [1.050344s] Feb 26 00:10:10.003: INFO: Created: latency-svc-tdf9g Feb 26 00:10:10.007: INFO: Got endpoints: latency-svc-tdf9g [937.655771ms] Feb 26 00:10:10.040: INFO: Created: latency-svc-pk5v6 Feb 26 00:10:10.071: INFO: Got endpoints: latency-svc-pk5v6 [991.426468ms] Feb 26 00:10:10.073: INFO: Created: latency-svc-xqrfq Feb 26 00:10:10.123: INFO: Got endpoints: latency-svc-xqrfq [1.009078086s] Feb 26 00:10:10.167: INFO: Created: latency-svc-v67pm Feb 26 00:10:10.171: INFO: Got endpoints: latency-svc-v67pm [992.732091ms] Feb 26 00:10:10.196: INFO: Created: latency-svc-b75sc Feb 26 00:10:10.217: INFO: Got endpoints: latency-svc-b75sc [1.001905172s] Feb 26 00:10:10.274: INFO: Created: latency-svc-bzc77 Feb 26 00:10:10.285: INFO: Got endpoints: latency-svc-bzc77 [1.03410713s] Feb 26 00:10:10.321: INFO: Created: latency-svc-557hk Feb 26 00:10:10.349: INFO: Got endpoints: latency-svc-557hk [979.667146ms] Feb 26 00:10:10.356: INFO: Created: latency-svc-wsppf Feb 26 00:10:10.428: INFO: Got endpoints: latency-svc-wsppf [1.041569981s] Feb 26 00:10:10.429: INFO: Created: latency-svc-2dc4g Feb 26 00:10:10.438: INFO: Got endpoints: latency-svc-2dc4g [1.012587802s] Feb 26 00:10:10.488: INFO: Created: latency-svc-6727q Feb 26 00:10:10.515: INFO: Created: latency-svc-74f5n Feb 26 00:10:10.515: INFO: Got endpoints: latency-svc-6727q [917.260187ms] Feb 26 00:10:10.521: INFO: Got endpoints: latency-svc-74f5n [856.28388ms] Feb 26 00:10:10.596: INFO: Created: latency-svc-7n5ml Feb 26 00:10:10.599: INFO: Got endpoints: latency-svc-7n5ml [927.857771ms] Feb 26 00:10:10.616: INFO: Created: latency-svc-mzg96 Feb 26 00:10:10.653: INFO: Got endpoints: latency-svc-mzg96 [878.547462ms] Feb 26 00:10:10.661: INFO: Created: latency-svc-fhctw Feb 26 00:10:10.670: INFO: Got endpoints: latency-svc-fhctw [854.178685ms] Feb 26 00:10:10.753: INFO: Created: latency-svc-s7q7g Feb 26 00:10:10.777: INFO: Got endpoints: latency-svc-s7q7g [795.012312ms] Feb 26 00:10:10.781: INFO: Created: latency-svc-2nsfc Feb 26 00:10:10.798: INFO: Got endpoints: latency-svc-2nsfc [791.067797ms] Feb 26 00:10:10.832: INFO: Created: latency-svc-n295w Feb 26 00:10:10.837: INFO: Got endpoints: latency-svc-n295w [766.255947ms] Feb 26 00:10:10.944: INFO: Created: latency-svc-f86kl Feb 26 00:10:10.968: INFO: Got endpoints: latency-svc-f86kl [844.49183ms] Feb 26 00:10:11.017: INFO: Created: latency-svc-vqfsj Feb 26 00:10:11.025: INFO: Got endpoints: latency-svc-vqfsj [853.731612ms] Feb 26 00:10:11.078: INFO: Created: latency-svc-2xm8k Feb 26 00:10:11.112: INFO: Got endpoints: latency-svc-2xm8k [894.354234ms] Feb 26 00:10:11.113: INFO: Created: latency-svc-5lwhl Feb 26 00:10:11.123: INFO: Got endpoints: latency-svc-5lwhl [837.316905ms] Feb 26 00:10:11.155: INFO: Created: latency-svc-ql8zr Feb 26 00:10:11.163: INFO: Got endpoints: latency-svc-ql8zr [814.478976ms] Feb 26 00:10:11.233: INFO: Created: latency-svc-mmjfh Feb 26 00:10:11.247: INFO: Got endpoints: latency-svc-mmjfh [818.10556ms] Feb 26 00:10:11.268: INFO: Created: latency-svc-mrtqx Feb 26 00:10:11.272: INFO: Got endpoints: latency-svc-mrtqx [833.64516ms] Feb 26 00:10:11.292: INFO: Created: latency-svc-zvsl9 Feb 26 00:10:11.296: INFO: Got endpoints: latency-svc-zvsl9 [780.789445ms] Feb 26 00:10:11.432: INFO: Created: latency-svc-k7lm4 Feb 26 00:10:11.436: INFO: Got endpoints: latency-svc-k7lm4 [915.454217ms] Feb 26 00:10:11.485: INFO: Created: 
latency-svc-s245l Feb 26 00:10:11.494: INFO: Got endpoints: latency-svc-s245l [895.209199ms] Feb 26 00:10:11.597: INFO: Created: latency-svc-wbqbv Feb 26 00:10:11.621: INFO: Got endpoints: latency-svc-wbqbv [967.474701ms] Feb 26 00:10:11.626: INFO: Created: latency-svc-tdw8k Feb 26 00:10:11.630: INFO: Got endpoints: latency-svc-tdw8k [959.103813ms] Feb 26 00:10:11.657: INFO: Created: latency-svc-rs6zz Feb 26 00:10:11.661: INFO: Got endpoints: latency-svc-rs6zz [883.635196ms] Feb 26 00:10:11.707: INFO: Created: latency-svc-fxngb Feb 26 00:10:11.805: INFO: Got endpoints: latency-svc-fxngb [1.006199273s] Feb 26 00:10:11.821: INFO: Created: latency-svc-wh6rk Feb 26 00:10:11.828: INFO: Got endpoints: latency-svc-wh6rk [990.395207ms] Feb 26 00:10:11.889: INFO: Created: latency-svc-s7nwv Feb 26 00:10:11.902: INFO: Got endpoints: latency-svc-s7nwv [934.569813ms] Feb 26 00:10:12.006: INFO: Created: latency-svc-gz4dr Feb 26 00:10:12.021: INFO: Got endpoints: latency-svc-gz4dr [995.682974ms] Feb 26 00:10:12.043: INFO: Created: latency-svc-szprr Feb 26 00:10:12.048: INFO: Got endpoints: latency-svc-szprr [935.907534ms] Feb 26 00:10:12.066: INFO: Created: latency-svc-cgxbn Feb 26 00:10:12.067: INFO: Got endpoints: latency-svc-cgxbn [944.138421ms] Feb 26 00:10:12.095: INFO: Created: latency-svc-vsplx Feb 26 00:10:12.103: INFO: Got endpoints: latency-svc-vsplx [940.015838ms] Feb 26 00:10:12.206: INFO: Created: latency-svc-4bcql Feb 26 00:10:12.210: INFO: Got endpoints: latency-svc-4bcql [963.663724ms] Feb 26 00:10:12.238: INFO: Created: latency-svc-md7qt Feb 26 00:10:12.245: INFO: Got endpoints: latency-svc-md7qt [972.99425ms] Feb 26 00:10:12.342: INFO: Created: latency-svc-pf87w Feb 26 00:10:12.343: INFO: Got endpoints: latency-svc-pf87w [1.046157063s] Feb 26 00:10:12.397: INFO: Created: latency-svc-72lww Feb 26 00:10:12.397: INFO: Got endpoints: latency-svc-72lww [960.944873ms] Feb 26 00:10:12.419: INFO: Created: latency-svc-4mk2d Feb 26 00:10:12.427: INFO: Got endpoints: latency-svc-4mk2d [932.589911ms] Feb 26 00:10:12.546: INFO: Created: latency-svc-b8q2p Feb 26 00:10:12.561: INFO: Got endpoints: latency-svc-b8q2p [940.110713ms] Feb 26 00:10:12.586: INFO: Created: latency-svc-28rgp Feb 26 00:10:12.594: INFO: Got endpoints: latency-svc-28rgp [964.423678ms] Feb 26 00:10:12.661: INFO: Created: latency-svc-nxlcp Feb 26 00:10:12.693: INFO: Got endpoints: latency-svc-nxlcp [1.032384816s] Feb 26 00:10:12.696: INFO: Created: latency-svc-h99dj Feb 26 00:10:12.698: INFO: Got endpoints: latency-svc-h99dj [892.915617ms] Feb 26 00:10:12.719: INFO: Created: latency-svc-zqb92 Feb 26 00:10:12.735: INFO: Got endpoints: latency-svc-zqb92 [906.697525ms] Feb 26 00:10:12.754: INFO: Created: latency-svc-xnqzk Feb 26 00:10:12.810: INFO: Got endpoints: latency-svc-xnqzk [907.770099ms] Feb 26 00:10:12.827: INFO: Created: latency-svc-668zm Feb 26 00:10:12.837: INFO: Got endpoints: latency-svc-668zm [816.011892ms] Feb 26 00:10:12.873: INFO: Created: latency-svc-gjtz2 Feb 26 00:10:12.890: INFO: Got endpoints: latency-svc-gjtz2 [841.845756ms] Feb 26 00:10:12.906: INFO: Created: latency-svc-ct6mm Feb 26 00:10:12.967: INFO: Got endpoints: latency-svc-ct6mm [900.206383ms] Feb 26 00:10:13.012: INFO: Created: latency-svc-lfm2n Feb 26 00:10:13.021: INFO: Got endpoints: latency-svc-lfm2n [917.290549ms] Feb 26 00:10:13.141: INFO: Created: latency-svc-89gmh Feb 26 00:10:13.169: INFO: Got endpoints: latency-svc-89gmh [958.722469ms] Feb 26 00:10:13.172: INFO: Created: latency-svc-rdw98 Feb 26 00:10:13.194: INFO: Got endpoints: 
latency-svc-rdw98 [948.183615ms] Feb 26 00:10:13.219: INFO: Created: latency-svc-6gntw Feb 26 00:10:13.224: INFO: Got endpoints: latency-svc-6gntw [880.843595ms] Feb 26 00:10:13.292: INFO: Created: latency-svc-cwgtm Feb 26 00:10:13.292: INFO: Got endpoints: latency-svc-cwgtm [895.048327ms] Feb 26 00:10:13.325: INFO: Created: latency-svc-cnqcx Feb 26 00:10:13.327: INFO: Got endpoints: latency-svc-cnqcx [899.366708ms] Feb 26 00:10:13.358: INFO: Created: latency-svc-jzfjn Feb 26 00:10:13.375: INFO: Got endpoints: latency-svc-jzfjn [812.977658ms] Feb 26 00:10:13.377: INFO: Created: latency-svc-529h4 Feb 26 00:10:13.440: INFO: Got endpoints: latency-svc-529h4 [845.212188ms] Feb 26 00:10:13.443: INFO: Created: latency-svc-8pg2q Feb 26 00:10:13.461: INFO: Got endpoints: latency-svc-8pg2q [767.665002ms] Feb 26 00:10:13.508: INFO: Created: latency-svc-7btpb Feb 26 00:10:13.510: INFO: Got endpoints: latency-svc-7btpb [812.553964ms] Feb 26 00:10:13.540: INFO: Created: latency-svc-98blj Feb 26 00:10:13.625: INFO: Got endpoints: latency-svc-98blj [889.874958ms] Feb 26 00:10:13.649: INFO: Created: latency-svc-bmfxd Feb 26 00:10:13.654: INFO: Got endpoints: latency-svc-bmfxd [843.325401ms] Feb 26 00:10:13.688: INFO: Created: latency-svc-v2x2p Feb 26 00:10:13.698: INFO: Got endpoints: latency-svc-v2x2p [860.799059ms] Feb 26 00:10:13.734: INFO: Created: latency-svc-vdsz9 Feb 26 00:10:13.784: INFO: Got endpoints: latency-svc-vdsz9 [893.848292ms] Feb 26 00:10:13.804: INFO: Created: latency-svc-bzl6z Feb 26 00:10:13.806: INFO: Got endpoints: latency-svc-bzl6z [838.604544ms] Feb 26 00:10:13.859: INFO: Created: latency-svc-r7kpc Feb 26 00:10:13.866: INFO: Got endpoints: latency-svc-r7kpc [845.10766ms] Feb 26 00:10:13.935: INFO: Created: latency-svc-6h247 Feb 26 00:10:13.965: INFO: Created: latency-svc-xgvsz Feb 26 00:10:13.965: INFO: Got endpoints: latency-svc-6h247 [795.076953ms] Feb 26 00:10:13.991: INFO: Got endpoints: latency-svc-xgvsz [797.797855ms] Feb 26 00:10:14.028: INFO: Created: latency-svc-f6t4z Feb 26 00:10:14.082: INFO: Got endpoints: latency-svc-f6t4z [857.852827ms] Feb 26 00:10:14.097: INFO: Created: latency-svc-rprhc Feb 26 00:10:14.118: INFO: Got endpoints: latency-svc-rprhc [825.07641ms] Feb 26 00:10:14.121: INFO: Created: latency-svc-jfd4w Feb 26 00:10:14.135: INFO: Got endpoints: latency-svc-jfd4w [808.59727ms] Feb 26 00:10:14.159: INFO: Created: latency-svc-z95vk Feb 26 00:10:14.163: INFO: Got endpoints: latency-svc-z95vk [787.492444ms] Feb 26 00:10:14.246: INFO: Created: latency-svc-txkb2 Feb 26 00:10:14.280: INFO: Created: latency-svc-dc2dk Feb 26 00:10:14.282: INFO: Got endpoints: latency-svc-txkb2 [842.326043ms] Feb 26 00:10:14.311: INFO: Got endpoints: latency-svc-dc2dk [849.722874ms] Feb 26 00:10:14.330: INFO: Created: latency-svc-vzlkz Feb 26 00:10:14.332: INFO: Got endpoints: latency-svc-vzlkz [821.487154ms] Feb 26 00:10:14.423: INFO: Created: latency-svc-xg4t2 Feb 26 00:10:14.448: INFO: Created: latency-svc-z7gvv Feb 26 00:10:14.450: INFO: Got endpoints: latency-svc-xg4t2 [825.307607ms] Feb 26 00:10:14.470: INFO: Got endpoints: latency-svc-z7gvv [815.903691ms] Feb 26 00:10:14.581: INFO: Created: latency-svc-tq949 Feb 26 00:10:14.581: INFO: Got endpoints: latency-svc-tq949 [883.282516ms] Feb 26 00:10:14.622: INFO: Created: latency-svc-ssh7g Feb 26 00:10:14.654: INFO: Got endpoints: latency-svc-ssh7g [869.466229ms] Feb 26 00:10:14.724: INFO: Created: latency-svc-hbbnl Feb 26 00:10:14.733: INFO: Got endpoints: latency-svc-hbbnl [926.681919ms] Feb 26 00:10:14.765: INFO: Created: 
latency-svc-2pk72 Feb 26 00:10:14.777: INFO: Got endpoints: latency-svc-2pk72 [910.557124ms] Feb 26 00:10:14.801: INFO: Created: latency-svc-hhvwm Feb 26 00:10:14.813: INFO: Got endpoints: latency-svc-hhvwm [847.792992ms] Feb 26 00:10:14.895: INFO: Created: latency-svc-rhn9t Feb 26 00:10:14.895: INFO: Got endpoints: latency-svc-rhn9t [903.712256ms] Feb 26 00:10:14.925: INFO: Created: latency-svc-l62qv Feb 26 00:10:14.928: INFO: Got endpoints: latency-svc-l62qv [845.867978ms] Feb 26 00:10:15.070: INFO: Created: latency-svc-84pgr Feb 26 00:10:15.121: INFO: Got endpoints: latency-svc-84pgr [1.00318952s] Feb 26 00:10:15.123: INFO: Created: latency-svc-bzcgc Feb 26 00:10:15.132: INFO: Got endpoints: latency-svc-bzcgc [996.616497ms] Feb 26 00:10:15.163: INFO: Created: latency-svc-s9vbm Feb 26 00:10:15.168: INFO: Got endpoints: latency-svc-s9vbm [1.004922012s] Feb 26 00:10:15.219: INFO: Created: latency-svc-pdgj8 Feb 26 00:10:15.226: INFO: Got endpoints: latency-svc-pdgj8 [943.99294ms] Feb 26 00:10:15.254: INFO: Created: latency-svc-8rvfc Feb 26 00:10:15.263: INFO: Got endpoints: latency-svc-8rvfc [951.605698ms] Feb 26 00:10:15.288: INFO: Created: latency-svc-nk6v2 Feb 26 00:10:15.289: INFO: Got endpoints: latency-svc-nk6v2 [957.005944ms] Feb 26 00:10:15.312: INFO: Created: latency-svc-z99v9 Feb 26 00:10:15.316: INFO: Got endpoints: latency-svc-z99v9 [865.248256ms] Feb 26 00:10:15.365: INFO: Created: latency-svc-6mxkf Feb 26 00:10:15.366: INFO: Got endpoints: latency-svc-6mxkf [895.703383ms] Feb 26 00:10:15.403: INFO: Created: latency-svc-t75gq Feb 26 00:10:15.419: INFO: Got endpoints: latency-svc-t75gq [837.370319ms] Feb 26 00:10:15.441: INFO: Created: latency-svc-zqffw Feb 26 00:10:15.443: INFO: Got endpoints: latency-svc-zqffw [789.040592ms] Feb 26 00:10:15.514: INFO: Created: latency-svc-psxrx Feb 26 00:10:15.515: INFO: Got endpoints: latency-svc-psxrx [781.654806ms] Feb 26 00:10:15.594: INFO: Created: latency-svc-4msrl Feb 26 00:10:15.601: INFO: Got endpoints: latency-svc-4msrl [823.328742ms] Feb 26 00:10:15.679: INFO: Created: latency-svc-2jh7d Feb 26 00:10:15.680: INFO: Got endpoints: latency-svc-2jh7d [867.65937ms] Feb 26 00:10:15.711: INFO: Created: latency-svc-x6w9p Feb 26 00:10:15.722: INFO: Got endpoints: latency-svc-x6w9p [826.667212ms] Feb 26 00:10:15.739: INFO: Created: latency-svc-pwtt2 Feb 26 00:10:15.813: INFO: Created: latency-svc-qhz4c Feb 26 00:10:15.814: INFO: Got endpoints: latency-svc-pwtt2 [885.85636ms] Feb 26 00:10:15.818: INFO: Got endpoints: latency-svc-qhz4c [697.151905ms] Feb 26 00:10:15.843: INFO: Created: latency-svc-tfr8l Feb 26 00:10:15.862: INFO: Got endpoints: latency-svc-tfr8l [729.559161ms] Feb 26 00:10:15.881: INFO: Created: latency-svc-bcq9q Feb 26 00:10:15.890: INFO: Got endpoints: latency-svc-bcq9q [722.276768ms] Feb 26 00:10:15.962: INFO: Created: latency-svc-t5scl Feb 26 00:10:15.992: INFO: Got endpoints: latency-svc-t5scl [765.514201ms] Feb 26 00:10:15.995: INFO: Created: latency-svc-2wtdg Feb 26 00:10:16.000: INFO: Got endpoints: latency-svc-2wtdg [736.915353ms] Feb 26 00:10:16.024: INFO: Created: latency-svc-d85fh Feb 26 00:10:16.030: INFO: Got endpoints: latency-svc-d85fh [740.188314ms] Feb 26 00:10:16.146: INFO: Created: latency-svc-klpj8 Feb 26 00:10:16.147: INFO: Got endpoints: latency-svc-klpj8 [831.000372ms] Feb 26 00:10:16.187: INFO: Created: latency-svc-425jx Feb 26 00:10:16.196: INFO: Got endpoints: latency-svc-425jx [830.40564ms] Feb 26 00:10:16.231: INFO: Created: latency-svc-kfs2d Feb 26 00:10:16.239: INFO: Got endpoints: 
latency-svc-kfs2d [819.571974ms] Feb 26 00:10:16.289: INFO: Created: latency-svc-xgddm Feb 26 00:10:16.298: INFO: Got endpoints: latency-svc-xgddm [855.090165ms] Feb 26 00:10:16.324: INFO: Created: latency-svc-f77fn Feb 26 00:10:16.329: INFO: Got endpoints: latency-svc-f77fn [813.657933ms] Feb 26 00:10:16.346: INFO: Created: latency-svc-2x9sk Feb 26 00:10:16.353: INFO: Got endpoints: latency-svc-2x9sk [751.638701ms] Feb 26 00:10:16.377: INFO: Created: latency-svc-jgns5 Feb 26 00:10:16.377: INFO: Got endpoints: latency-svc-jgns5 [696.325738ms] Feb 26 00:10:16.451: INFO: Created: latency-svc-gcz86 Feb 26 00:10:16.452: INFO: Got endpoints: latency-svc-gcz86 [729.729133ms] Feb 26 00:10:16.493: INFO: Created: latency-svc-plzmg Feb 26 00:10:16.511: INFO: Got endpoints: latency-svc-plzmg [697.003718ms] Feb 26 00:10:16.518: INFO: Created: latency-svc-vlq5b Feb 26 00:10:16.518: INFO: Got endpoints: latency-svc-vlq5b [699.698764ms] Feb 26 00:10:16.540: INFO: Created: latency-svc-f6hzh Feb 26 00:10:16.542: INFO: Got endpoints: latency-svc-f6hzh [679.298415ms] Feb 26 00:10:16.591: INFO: Created: latency-svc-nchlq Feb 26 00:10:16.594: INFO: Got endpoints: latency-svc-nchlq [703.561637ms] Feb 26 00:10:16.616: INFO: Created: latency-svc-cr9c4 Feb 26 00:10:16.622: INFO: Got endpoints: latency-svc-cr9c4 [629.220729ms] Feb 26 00:10:16.656: INFO: Created: latency-svc-tvf5k Feb 26 00:10:16.656: INFO: Got endpoints: latency-svc-tvf5k [656.245276ms] Feb 26 00:10:16.710: INFO: Created: latency-svc-vmswk Feb 26 00:10:16.719: INFO: Got endpoints: latency-svc-vmswk [689.041651ms] Feb 26 00:10:16.737: INFO: Created: latency-svc-566dc Feb 26 00:10:16.738: INFO: Got endpoints: latency-svc-566dc [591.167041ms] Feb 26 00:10:16.754: INFO: Created: latency-svc-k2zcv Feb 26 00:10:16.769: INFO: Got endpoints: latency-svc-k2zcv [573.17143ms] Feb 26 00:10:16.888: INFO: Created: latency-svc-wzk9s Feb 26 00:10:16.889: INFO: Got endpoints: latency-svc-wzk9s [650.416998ms] Feb 26 00:10:16.943: INFO: Created: latency-svc-g5bmb Feb 26 00:10:16.972: INFO: Got endpoints: latency-svc-g5bmb [673.906066ms] Feb 26 00:10:17.093: INFO: Created: latency-svc-b8jgd Feb 26 00:10:17.126: INFO: Got endpoints: latency-svc-b8jgd [797.013285ms] Feb 26 00:10:17.133: INFO: Created: latency-svc-587gm Feb 26 00:10:17.157: INFO: Got endpoints: latency-svc-587gm [804.393897ms] Feb 26 00:10:17.163: INFO: Created: latency-svc-2psv2 Feb 26 00:10:17.187: INFO: Got endpoints: latency-svc-2psv2 [810.400435ms] Feb 26 00:10:17.229: INFO: Created: latency-svc-hh4pf Feb 26 00:10:17.253: INFO: Got endpoints: latency-svc-hh4pf [800.959917ms] Feb 26 00:10:17.293: INFO: Created: latency-svc-p2mlq Feb 26 00:10:17.306: INFO: Got endpoints: latency-svc-p2mlq [794.756617ms] Feb 26 00:10:17.333: INFO: Created: latency-svc-tvnj6 Feb 26 00:10:17.369: INFO: Got endpoints: latency-svc-tvnj6 [851.267229ms] Feb 26 00:10:17.389: INFO: Created: latency-svc-5c2ps Feb 26 00:10:17.408: INFO: Got endpoints: latency-svc-5c2ps [866.296263ms] Feb 26 00:10:17.409: INFO: Created: latency-svc-wb6rz Feb 26 00:10:17.420: INFO: Got endpoints: latency-svc-wb6rz [825.89044ms] Feb 26 00:10:17.445: INFO: Created: latency-svc-6q578 Feb 26 00:10:17.523: INFO: Got endpoints: latency-svc-6q578 [901.334344ms] Feb 26 00:10:17.528: INFO: Created: latency-svc-jlkl4 Feb 26 00:10:17.564: INFO: Created: latency-svc-6lmv4 Feb 26 00:10:17.566: INFO: Got endpoints: latency-svc-jlkl4 [909.365864ms] Feb 26 00:10:17.570: INFO: Got endpoints: latency-svc-6lmv4 [851.116917ms] Feb 26 00:10:17.596: INFO: Created: 
latency-svc-7mvbs Feb 26 00:10:17.615: INFO: Got endpoints: latency-svc-7mvbs [876.611162ms] Feb 26 00:10:17.655: INFO: Created: latency-svc-sqr7k Feb 26 00:10:17.666: INFO: Got endpoints: latency-svc-sqr7k [896.586548ms] Feb 26 00:10:17.686: INFO: Created: latency-svc-ktcdg Feb 26 00:10:17.689: INFO: Got endpoints: latency-svc-ktcdg [799.565679ms] Feb 26 00:10:17.713: INFO: Created: latency-svc-qdp9l Feb 26 00:10:17.726: INFO: Got endpoints: latency-svc-qdp9l [753.089827ms] Feb 26 00:10:17.869: INFO: Created: latency-svc-9cj8t Feb 26 00:10:17.870: INFO: Got endpoints: latency-svc-9cj8t [744.272687ms] Feb 26 00:10:17.929: INFO: Created: latency-svc-86475 Feb 26 00:10:17.944: INFO: Got endpoints: latency-svc-86475 [786.662055ms] Feb 26 00:10:18.065: INFO: Created: latency-svc-9n54k Feb 26 00:10:18.076: INFO: Got endpoints: latency-svc-9n54k [888.427618ms] Feb 26 00:10:18.076: INFO: Latencies: [104.498187ms 162.191188ms 266.039563ms 308.147832ms 329.582562ms 479.238065ms 496.669417ms 573.17143ms 591.167041ms 629.220729ms 650.416998ms 653.648482ms 656.245276ms 673.906066ms 679.20979ms 679.298415ms 689.041651ms 696.325738ms 697.003718ms 697.151905ms 699.698764ms 703.561637ms 722.276768ms 729.559161ms 729.729133ms 736.915353ms 740.188314ms 744.272687ms 751.638701ms 753.089827ms 765.514201ms 765.705536ms 766.255947ms 767.665002ms 780.470201ms 780.789445ms 781.654806ms 786.662055ms 787.492444ms 789.040592ms 791.067797ms 793.121034ms 793.649954ms 794.756617ms 795.012312ms 795.076953ms 797.013285ms 797.797855ms 799.565679ms 800.959917ms 804.393897ms 808.59727ms 810.400435ms 812.553964ms 812.977658ms 813.657933ms 814.478976ms 815.903691ms 816.011892ms 816.177255ms 816.842728ms 818.10556ms 819.571974ms 821.487154ms 823.328742ms 825.07641ms 825.307607ms 825.89044ms 826.667212ms 827.598555ms 829.388142ms 830.40564ms 831.000372ms 831.651086ms 833.64516ms 834.775224ms 837.316905ms 837.370319ms 838.604544ms 841.845756ms 842.326043ms 843.325401ms 844.052499ms 844.373123ms 844.49183ms 845.10766ms 845.212188ms 845.867978ms 847.792992ms 849.722874ms 851.116917ms 851.267229ms 851.753555ms 853.731612ms 854.178685ms 855.090165ms 856.28388ms 856.936512ms 857.852827ms 860.799059ms 864.105659ms 865.248256ms 866.296263ms 867.65937ms 869.466229ms 876.611162ms 877.75298ms 878.547462ms 880.843595ms 883.282516ms 883.635196ms 885.217932ms 885.85636ms 888.427618ms 889.874958ms 891.442175ms 892.915617ms 893.848292ms 894.354234ms 895.048327ms 895.209199ms 895.703383ms 896.586548ms 899.366708ms 900.206383ms 901.334344ms 903.712256ms 905.502592ms 906.697525ms 907.770099ms 909.365864ms 910.557124ms 913.53048ms 915.454217ms 917.260187ms 917.290549ms 926.681919ms 927.757146ms 927.857771ms 932.589911ms 934.569813ms 935.907534ms 937.655771ms 940.015838ms 940.110713ms 940.769505ms 943.99294ms 944.138421ms 945.429435ms 948.183615ms 951.605698ms 955.655432ms 957.005944ms 958.722469ms 959.103813ms 960.944873ms 962.98564ms 963.663724ms 964.423678ms 966.128699ms 967.474701ms 967.939661ms 972.99425ms 976.30257ms 979.667146ms 982.027324ms 983.271708ms 983.889479ms 990.395207ms 991.426468ms 992.732091ms 993.341553ms 995.682974ms 996.616497ms 997.988477ms 999.36652ms 1.001905172s 1.00318952s 1.004922012s 1.005733946s 1.006199273s 1.009078086s 1.012587802s 1.012753818s 1.014215455s 1.025561272s 1.026480366s 1.027590478s 1.031074471s 1.032384816s 1.03410713s 1.035114593s 1.041569981s 1.045818182s 1.046157063s 1.050344s 1.050878572s 1.063359691s 1.066162887s 1.072759232s] Feb 26 00:10:18.076: INFO: 50 %ile: 864.105659ms Feb 26 00:10:18.076: 
INFO: 90 %ile: 1.006199273s Feb 26 00:10:18.076: INFO: 99 %ile: 1.066162887s Feb 26 00:10:18.076: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:10:18.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7464" for this suite. • [SLOW TEST:22.296 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":280,"completed":59,"skipped":1073,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:10:18.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-4477f584-d0ca-4265-b4b6-bbef3bf58b14 STEP: Creating a pod to test consume configMaps Feb 26 00:10:18.282: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dcfcb66e-4fd9-45df-b9a7-1019a30a34e9" in namespace "projected-5827" to be "success or failure" Feb 26 00:10:18.288: INFO: Pod "pod-projected-configmaps-dcfcb66e-4fd9-45df-b9a7-1019a30a34e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057128ms Feb 26 00:10:20.295: INFO: Pod "pod-projected-configmaps-dcfcb66e-4fd9-45df-b9a7-1019a30a34e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013547494s Feb 26 00:10:22.301: INFO: Pod "pod-projected-configmaps-dcfcb66e-4fd9-45df-b9a7-1019a30a34e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019211867s Feb 26 00:10:24.796: INFO: Pod "pod-projected-configmaps-dcfcb66e-4fd9-45df-b9a7-1019a30a34e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.513637309s Feb 26 00:10:26.812: INFO: Pod "pod-projected-configmaps-dcfcb66e-4fd9-45df-b9a7-1019a30a34e9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.529917774s STEP: Saw pod success Feb 26 00:10:26.812: INFO: Pod "pod-projected-configmaps-dcfcb66e-4fd9-45df-b9a7-1019a30a34e9" satisfied condition "success or failure" Feb 26 00:10:26.830: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-dcfcb66e-4fd9-45df-b9a7-1019a30a34e9 container projected-configmap-volume-test: STEP: delete the pod Feb 26 00:10:26.976: INFO: Waiting for pod pod-projected-configmaps-dcfcb66e-4fd9-45df-b9a7-1019a30a34e9 to disappear Feb 26 00:10:26.982: INFO: Pod pod-projected-configmaps-dcfcb66e-4fd9-45df-b9a7-1019a30a34e9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:10:26.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5827" for this suite. • [SLOW TEST:8.898 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":60,"skipped":1075,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:10:27.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:10:44.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6536" for this suite. • [SLOW TEST:17.199 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":280,"completed":61,"skipped":1084,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:10:44.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-a0418fb4-be22-4db9-ac59-f1df5b903add STEP: Creating a pod to test consume configMaps Feb 26 00:10:44.450: INFO: Waiting up to 5m0s for pod "pod-configmaps-f06b4c8d-8315-4dfb-8c6b-673cdb26f387" in namespace "configmap-5834" to be "success or failure" Feb 26 00:10:44.680: INFO: Pod "pod-configmaps-f06b4c8d-8315-4dfb-8c6b-673cdb26f387": Phase="Pending", Reason="", readiness=false. Elapsed: 229.342943ms Feb 26 00:10:46.693: INFO: Pod "pod-configmaps-f06b4c8d-8315-4dfb-8c6b-673cdb26f387": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242858571s Feb 26 00:10:48.733: INFO: Pod "pod-configmaps-f06b4c8d-8315-4dfb-8c6b-673cdb26f387": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282792192s Feb 26 00:10:50.760: INFO: Pod "pod-configmaps-f06b4c8d-8315-4dfb-8c6b-673cdb26f387": Phase="Pending", Reason="", readiness=false. Elapsed: 6.310324404s Feb 26 00:10:52.847: INFO: Pod "pod-configmaps-f06b4c8d-8315-4dfb-8c6b-673cdb26f387": Phase="Pending", Reason="", readiness=false. Elapsed: 8.396505103s Feb 26 00:10:54.859: INFO: Pod "pod-configmaps-f06b4c8d-8315-4dfb-8c6b-673cdb26f387": Phase="Pending", Reason="", readiness=false. Elapsed: 10.408666793s Feb 26 00:10:56.869: INFO: Pod "pod-configmaps-f06b4c8d-8315-4dfb-8c6b-673cdb26f387": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.419047012s STEP: Saw pod success Feb 26 00:10:56.870: INFO: Pod "pod-configmaps-f06b4c8d-8315-4dfb-8c6b-673cdb26f387" satisfied condition "success or failure" Feb 26 00:10:56.874: INFO: Trying to get logs from node jerma-node pod pod-configmaps-f06b4c8d-8315-4dfb-8c6b-673cdb26f387 container configmap-volume-test: STEP: delete the pod Feb 26 00:10:56.927: INFO: Waiting for pod pod-configmaps-f06b4c8d-8315-4dfb-8c6b-673cdb26f387 to disappear Feb 26 00:10:56.935: INFO: Pod pod-configmaps-f06b4c8d-8315-4dfb-8c6b-673cdb26f387 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:10:56.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5834" for this suite. 
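[Annotation] Same defaultMode idea as the projected variant earlier, but on a plain configMap volume: DefaultMode on ConfigMapVolumeSource sets the mode bits of every file projected from the ConfigMap's keys. A minimal sketch (names, key, and image are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	mode := int32(0400)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
						DefaultMode:          &mode,
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}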
• [SLOW TEST:12.830 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":62,"skipped":1110,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:10:57.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:11:08.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9566" for this suite. • [SLOW TEST:11.309 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":280,"completed":63,"skipped":1140,"failed":0} [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:11:08.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
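[Annotation] The [It] step just below attaches a postStart HTTP hook to the pod it creates, pointed at the handler container that the BeforeEach above stood up. A rough sketch of such a pod spec follows; the handler IP, names, and image are illustrative (the real test reads the IP from the handler pod's status), and the Handler type of this 1.17-era API was later renamed LifecycleHandler.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Illustrative stand-in for the handler pod's IP.
	handlerPodIP := "10.32.0.4"
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-http-hook",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// The kubelet fires this GET right after the container starts;
					// the handler records the hit, which "check poststart hook" polls for.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Port: intstr.FromInt(8080),
							Host: handlerPodIP,
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}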
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 26 00:11:24.599: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 26 00:11:24.644: INFO: Pod pod-with-poststart-http-hook still exists Feb 26 00:11:26.645: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 26 00:11:26.655: INFO: Pod pod-with-poststart-http-hook still exists Feb 26 00:11:28.645: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 26 00:11:28.655: INFO: Pod pod-with-poststart-http-hook still exists Feb 26 00:11:30.645: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 26 00:11:30.654: INFO: Pod pod-with-poststart-http-hook still exists Feb 26 00:11:32.645: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 26 00:11:32.651: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:11:32.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2197" for this suite. • [SLOW TEST:24.318 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":280,"completed":64,"skipped":1140,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:11:32.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:11:37.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7387" for this suite. 
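[Annotation] The ordering guarantee verified above holds because every watch started from the same resourceVersion replays the same event history. A hedged sketch that opens two watches at one resourceVersion and checks they agree; the namespace and the choice of ConfigMaps are illustrative, and a real check would also handle non-object events such as bookmarks before type-asserting.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default"

	// Anchor both watches at the same resourceVersion so they replay
	// exactly the same slice of history.
	list, err := cs.CoreV1().ConfigMaps(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	opts := metav1.ListOptions{ResourceVersion: list.ResourceVersion}
	w1, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, opts)
	if err != nil {
		panic(err)
	}
	defer w1.Stop()
	w2, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, opts)
	if err != nil {
		panic(err)
	}
	defer w2.Stop()

	// Both watchers must observe identical resourceVersions in identical order.
	for i := 0; i < 10; i++ {
		e1 := <-w1.ResultChan()
		e2 := <-w2.ResultChan()
		// Sketch: assumes Added/Modified/Deleted events carrying *corev1.ConfigMap.
		rv1 := e1.Object.(*corev1.ConfigMap).ResourceVersion
		rv2 := e2.Object.(*corev1.ConfigMap).ResourceVersion
		if rv1 != rv2 {
			panic(fmt.Sprintf("watch order diverged: %s vs %s", rv1, rv2))
		}
		fmt.Println("both watches saw resourceVersion", rv1)
	}
}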
• [SLOW TEST:5.377 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":280,"completed":65,"skipped":1161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:11:38.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 26 00:12:00.241: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1135 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 26 00:12:00.241: INFO: >>> kubeConfig: /root/.kube/config I0226 00:12:00.295410 9 log.go:172] (0xc002e32210) (0xc001eac6e0) Create stream I0226 00:12:00.295645 9 log.go:172] (0xc002e32210) (0xc001eac6e0) Stream added, broadcasting: 1 I0226 00:12:00.298474 9 log.go:172] (0xc002e32210) Reply frame received for 1 I0226 00:12:00.298507 9 log.go:172] (0xc002e32210) (0xc001d901e0) Create stream I0226 00:12:00.298521 9 log.go:172] (0xc002e32210) (0xc001d901e0) Stream added, broadcasting: 3 I0226 00:12:00.300322 9 log.go:172] (0xc002e32210) Reply frame received for 3 I0226 00:12:00.300346 9 log.go:172] (0xc002e32210) (0xc001eac780) Create stream I0226 00:12:00.300356 9 log.go:172] (0xc002e32210) (0xc001eac780) Stream added, broadcasting: 5 I0226 00:12:00.301663 9 log.go:172] (0xc002e32210) Reply frame received for 5 I0226 00:12:00.388466 9 log.go:172] (0xc002e32210) Data frame received for 3 I0226 00:12:00.388922 9 log.go:172] (0xc001d901e0) (3) Data frame handling I0226 00:12:00.389061 9 log.go:172] (0xc001d901e0) (3) Data frame sent I0226 00:12:00.485794 9 log.go:172] (0xc002e32210) (0xc001d901e0) Stream removed, broadcasting: 3 I0226 00:12:00.486438 9 log.go:172] (0xc002e32210) Data frame received for 1 I0226 00:12:00.486473 9 log.go:172] (0xc001eac6e0) (1) Data frame handling I0226 00:12:00.486522 9 log.go:172] (0xc001eac6e0) (1) Data frame sent I0226 00:12:00.486537 9 log.go:172] (0xc002e32210) (0xc001eac6e0) Stream removed, broadcasting: 1 I0226 00:12:00.486929 9 log.go:172] (0xc002e32210) (0xc001eac780) Stream removed, broadcasting: 5 I0226 00:12:00.487008 9 log.go:172] (0xc002e32210) (0xc001eac6e0) Stream removed, broadcasting: 1 I0226 00:12:00.487019 9 log.go:172] (0xc002e32210) (0xc001d901e0) Stream 
removed, broadcasting: 3 I0226 00:12:00.487080 9 log.go:172] (0xc002e32210) (0xc001eac780) Stream removed, broadcasting: 5 I0226 00:12:00.487403 9 log.go:172] (0xc002e32210) Go away received Feb 26 00:12:00.487: INFO: Exec stderr: "" Feb 26 00:12:00.487: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1135 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 26 00:12:00.488: INFO: >>> kubeConfig: /root/.kube/config I0226 00:12:00.538835 9 log.go:172] (0xc002956370) (0xc0018683c0) Create stream I0226 00:12:00.539164 9 log.go:172] (0xc002956370) (0xc0018683c0) Stream added, broadcasting: 1 I0226 00:12:00.545140 9 log.go:172] (0xc002956370) Reply frame received for 1 I0226 00:12:00.545231 9 log.go:172] (0xc002956370) (0xc001eac820) Create stream I0226 00:12:00.545261 9 log.go:172] (0xc002956370) (0xc001eac820) Stream added, broadcasting: 3 I0226 00:12:00.546315 9 log.go:172] (0xc002956370) Reply frame received for 3 I0226 00:12:00.546346 9 log.go:172] (0xc002956370) (0xc001d90320) Create stream I0226 00:12:00.546365 9 log.go:172] (0xc002956370) (0xc001d90320) Stream added, broadcasting: 5 I0226 00:12:00.547964 9 log.go:172] (0xc002956370) Reply frame received for 5 I0226 00:12:00.661572 9 log.go:172] (0xc002956370) Data frame received for 3 I0226 00:12:00.661856 9 log.go:172] (0xc001eac820) (3) Data frame handling I0226 00:12:00.661924 9 log.go:172] (0xc001eac820) (3) Data frame sent I0226 00:12:00.753203 9 log.go:172] (0xc002956370) (0xc001eac820) Stream removed, broadcasting: 3 I0226 00:12:00.753348 9 log.go:172] (0xc002956370) Data frame received for 1 I0226 00:12:00.753361 9 log.go:172] (0xc0018683c0) (1) Data frame handling I0226 00:12:00.753373 9 log.go:172] (0xc0018683c0) (1) Data frame sent I0226 00:12:00.753402 9 log.go:172] (0xc002956370) (0xc0018683c0) Stream removed, broadcasting: 1 I0226 00:12:00.753522 9 log.go:172] (0xc002956370) (0xc001d90320) Stream removed, broadcasting: 5 I0226 00:12:00.753535 9 log.go:172] (0xc002956370) Go away received I0226 00:12:00.753892 9 log.go:172] (0xc002956370) (0xc0018683c0) Stream removed, broadcasting: 1 I0226 00:12:00.753905 9 log.go:172] (0xc002956370) (0xc001eac820) Stream removed, broadcasting: 3 I0226 00:12:00.753913 9 log.go:172] (0xc002956370) (0xc001d90320) Stream removed, broadcasting: 5 Feb 26 00:12:00.753: INFO: Exec stderr: "" Feb 26 00:12:00.754: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1135 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 26 00:12:00.754: INFO: >>> kubeConfig: /root/.kube/config I0226 00:12:00.804991 9 log.go:172] (0xc0029569a0) (0xc0018685a0) Create stream I0226 00:12:00.805190 9 log.go:172] (0xc0029569a0) (0xc0018685a0) Stream added, broadcasting: 1 I0226 00:12:00.810600 9 log.go:172] (0xc0029569a0) Reply frame received for 1 I0226 00:12:00.810708 9 log.go:172] (0xc0029569a0) (0xc001eac960) Create stream I0226 00:12:00.810722 9 log.go:172] (0xc0029569a0) (0xc001eac960) Stream added, broadcasting: 3 I0226 00:12:00.812396 9 log.go:172] (0xc0029569a0) Reply frame received for 3 I0226 00:12:00.812421 9 log.go:172] (0xc0029569a0) (0xc0023fa000) Create stream I0226 00:12:00.812433 9 log.go:172] (0xc0029569a0) (0xc0023fa000) Stream added, broadcasting: 5 I0226 00:12:00.813752 9 log.go:172] (0xc0029569a0) Reply frame received for 5 I0226 00:12:00.926512 9 log.go:172] (0xc0029569a0) Data frame received for 3 I0226 
00:12:00.926647 9 log.go:172] (0xc001eac960) (3) Data frame handling I0226 00:12:00.926676 9 log.go:172] (0xc001eac960) (3) Data frame sent I0226 00:12:01.028048 9 log.go:172] (0xc0029569a0) Data frame received for 1 I0226 00:12:01.028261 9 log.go:172] (0xc0029569a0) (0xc0023fa000) Stream removed, broadcasting: 5 I0226 00:12:01.028463 9 log.go:172] (0xc0018685a0) (1) Data frame handling I0226 00:12:01.028507 9 log.go:172] (0xc0029569a0) (0xc001eac960) Stream removed, broadcasting: 3 I0226 00:12:01.028542 9 log.go:172] (0xc0018685a0) (1) Data frame sent I0226 00:12:01.028567 9 log.go:172] (0xc0029569a0) (0xc0018685a0) Stream removed, broadcasting: 1 I0226 00:12:01.028579 9 log.go:172] (0xc0029569a0) Go away received I0226 00:12:01.029408 9 log.go:172] (0xc0029569a0) (0xc0018685a0) Stream removed, broadcasting: 1 I0226 00:12:01.029431 9 log.go:172] (0xc0029569a0) (0xc001eac960) Stream removed, broadcasting: 3 I0226 00:12:01.029444 9 log.go:172] (0xc0029569a0) (0xc0023fa000) Stream removed, broadcasting: 5 Feb 26 00:12:01.029: INFO: Exec stderr: "" Feb 26 00:12:01.029: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1135 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 26 00:12:01.029: INFO: >>> kubeConfig: /root/.kube/config I0226 00:12:01.085359 9 log.go:172] (0xc002956fd0) (0xc001868820) Create stream I0226 00:12:01.085507 9 log.go:172] (0xc002956fd0) (0xc001868820) Stream added, broadcasting: 1 I0226 00:12:01.089581 9 log.go:172] (0xc002956fd0) Reply frame received for 1 I0226 00:12:01.089611 9 log.go:172] (0xc002956fd0) (0xc001e3e320) Create stream I0226 00:12:01.089618 9 log.go:172] (0xc002956fd0) (0xc001e3e320) Stream added, broadcasting: 3 I0226 00:12:01.090794 9 log.go:172] (0xc002956fd0) Reply frame received for 3 I0226 00:12:01.090818 9 log.go:172] (0xc002956fd0) (0xc001868aa0) Create stream I0226 00:12:01.090826 9 log.go:172] (0xc002956fd0) (0xc001868aa0) Stream added, broadcasting: 5 I0226 00:12:01.092716 9 log.go:172] (0xc002956fd0) Reply frame received for 5 I0226 00:12:01.165281 9 log.go:172] (0xc002956fd0) Data frame received for 3 I0226 00:12:01.165535 9 log.go:172] (0xc001e3e320) (3) Data frame handling I0226 00:12:01.165584 9 log.go:172] (0xc001e3e320) (3) Data frame sent I0226 00:12:01.283640 9 log.go:172] (0xc002956fd0) (0xc001868aa0) Stream removed, broadcasting: 5 I0226 00:12:01.284246 9 log.go:172] (0xc002956fd0) Data frame received for 1 I0226 00:12:01.284286 9 log.go:172] (0xc001868820) (1) Data frame handling I0226 00:12:01.284313 9 log.go:172] (0xc001868820) (1) Data frame sent I0226 00:12:01.284373 9 log.go:172] (0xc002956fd0) (0xc001868820) Stream removed, broadcasting: 1 I0226 00:12:01.284594 9 log.go:172] (0xc002956fd0) (0xc001e3e320) Stream removed, broadcasting: 3 I0226 00:12:01.284616 9 log.go:172] (0xc002956fd0) Go away received I0226 00:12:01.285639 9 log.go:172] (0xc002956fd0) (0xc001868820) Stream removed, broadcasting: 1 I0226 00:12:01.285800 9 log.go:172] (0xc002956fd0) (0xc001e3e320) Stream removed, broadcasting: 3 I0226 00:12:01.286012 9 log.go:172] (0xc002956fd0) (0xc001868aa0) Stream removed, broadcasting: 5 Feb 26 00:12:01.286: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 26 00:12:01.286: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1135 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Feb 26 00:12:01.286: INFO: >>> kubeConfig: /root/.kube/config I0226 00:12:01.336493 9 log.go:172] (0xc002874fd0) (0xc001e3e8c0) Create stream I0226 00:12:01.336675 9 log.go:172] (0xc002874fd0) (0xc001e3e8c0) Stream added, broadcasting: 1 I0226 00:12:01.343653 9 log.go:172] (0xc002874fd0) Reply frame received for 1 I0226 00:12:01.343713 9 log.go:172] (0xc002874fd0) (0xc001868b40) Create stream I0226 00:12:01.343722 9 log.go:172] (0xc002874fd0) (0xc001868b40) Stream added, broadcasting: 3 I0226 00:12:01.345753 9 log.go:172] (0xc002874fd0) Reply frame received for 3 I0226 00:12:01.345838 9 log.go:172] (0xc002874fd0) (0xc001eacb40) Create stream I0226 00:12:01.345851 9 log.go:172] (0xc002874fd0) (0xc001eacb40) Stream added, broadcasting: 5 I0226 00:12:01.347330 9 log.go:172] (0xc002874fd0) Reply frame received for 5 I0226 00:12:01.428173 9 log.go:172] (0xc002874fd0) Data frame received for 3 I0226 00:12:01.428288 9 log.go:172] (0xc001868b40) (3) Data frame handling I0226 00:12:01.428333 9 log.go:172] (0xc001868b40) (3) Data frame sent I0226 00:12:01.511860 9 log.go:172] (0xc002874fd0) (0xc001868b40) Stream removed, broadcasting: 3 I0226 00:12:01.512083 9 log.go:172] (0xc002874fd0) Data frame received for 1 I0226 00:12:01.512113 9 log.go:172] (0xc001e3e8c0) (1) Data frame handling I0226 00:12:01.512147 9 log.go:172] (0xc001e3e8c0) (1) Data frame sent I0226 00:12:01.512170 9 log.go:172] (0xc002874fd0) (0xc001e3e8c0) Stream removed, broadcasting: 1 I0226 00:12:01.512217 9 log.go:172] (0xc002874fd0) (0xc001eacb40) Stream removed, broadcasting: 5 I0226 00:12:01.512355 9 log.go:172] (0xc002874fd0) Go away received I0226 00:12:01.512399 9 log.go:172] (0xc002874fd0) (0xc001e3e8c0) Stream removed, broadcasting: 1 I0226 00:12:01.512427 9 log.go:172] (0xc002874fd0) (0xc001868b40) Stream removed, broadcasting: 3 I0226 00:12:01.512492 9 log.go:172] (0xc002874fd0) (0xc001eacb40) Stream removed, broadcasting: 5 Feb 26 00:12:01.512: INFO: Exec stderr: "" Feb 26 00:12:01.512: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1135 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 26 00:12:01.512: INFO: >>> kubeConfig: /root/.kube/config I0226 00:12:01.554769 9 log.go:172] (0xc002957340) (0xc001868c80) Create stream I0226 00:12:01.554957 9 log.go:172] (0xc002957340) (0xc001868c80) Stream added, broadcasting: 1 I0226 00:12:01.559257 9 log.go:172] (0xc002957340) Reply frame received for 1 I0226 00:12:01.559365 9 log.go:172] (0xc002957340) (0xc001eacd20) Create stream I0226 00:12:01.559389 9 log.go:172] (0xc002957340) (0xc001eacd20) Stream added, broadcasting: 3 I0226 00:12:01.560737 9 log.go:172] (0xc002957340) Reply frame received for 3 I0226 00:12:01.560766 9 log.go:172] (0xc002957340) (0xc001868dc0) Create stream I0226 00:12:01.560779 9 log.go:172] (0xc002957340) (0xc001868dc0) Stream added, broadcasting: 5 I0226 00:12:01.562163 9 log.go:172] (0xc002957340) Reply frame received for 5 I0226 00:12:01.637708 9 log.go:172] (0xc002957340) Data frame received for 3 I0226 00:12:01.637797 9 log.go:172] (0xc001eacd20) (3) Data frame handling I0226 00:12:01.637822 9 log.go:172] (0xc001eacd20) (3) Data frame sent I0226 00:12:01.691014 9 log.go:172] (0xc002957340) (0xc001eacd20) Stream removed, broadcasting: 3 I0226 00:12:01.691155 9 log.go:172] (0xc002957340) Data frame received for 1 I0226 00:12:01.691178 9 log.go:172] (0xc001868c80) (1) Data frame handling I0226 
00:12:01.691200 9 log.go:172] (0xc001868c80) (1) Data frame sent I0226 00:12:01.691259 9 log.go:172] (0xc002957340) (0xc001868c80) Stream removed, broadcasting: 1 I0226 00:12:01.691457 9 log.go:172] (0xc002957340) (0xc001868dc0) Stream removed, broadcasting: 5 I0226 00:12:01.691504 9 log.go:172] (0xc002957340) Go away received I0226 00:12:01.691525 9 log.go:172] (0xc002957340) (0xc001868c80) Stream removed, broadcasting: 1 I0226 00:12:01.691554 9 log.go:172] (0xc002957340) (0xc001eacd20) Stream removed, broadcasting: 3 I0226 00:12:01.691583 9 log.go:172] (0xc002957340) (0xc001868dc0) Stream removed, broadcasting: 5 Feb 26 00:12:01.691: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 26 00:12:01.691: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1135 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 26 00:12:01.691: INFO: >>> kubeConfig: /root/.kube/config I0226 00:12:01.727182 9 log.go:172] (0xc002875810) (0xc001e3eb40) Create stream I0226 00:12:01.727364 9 log.go:172] (0xc002875810) (0xc001e3eb40) Stream added, broadcasting: 1 I0226 00:12:01.730048 9 log.go:172] (0xc002875810) Reply frame received for 1 I0226 00:12:01.730115 9 log.go:172] (0xc002875810) (0xc001868f00) Create stream I0226 00:12:01.730131 9 log.go:172] (0xc002875810) (0xc001868f00) Stream added, broadcasting: 3 I0226 00:12:01.731374 9 log.go:172] (0xc002875810) Reply frame received for 3 I0226 00:12:01.731398 9 log.go:172] (0xc002875810) (0xc001e3edc0) Create stream I0226 00:12:01.731408 9 log.go:172] (0xc002875810) (0xc001e3edc0) Stream added, broadcasting: 5 I0226 00:12:01.732398 9 log.go:172] (0xc002875810) Reply frame received for 5 I0226 00:12:01.800989 9 log.go:172] (0xc002875810) Data frame received for 3 I0226 00:12:01.801121 9 log.go:172] (0xc001868f00) (3) Data frame handling I0226 00:12:01.801155 9 log.go:172] (0xc001868f00) (3) Data frame sent I0226 00:12:01.897190 9 log.go:172] (0xc002875810) Data frame received for 1 I0226 00:12:01.897362 9 log.go:172] (0xc002875810) (0xc001868f00) Stream removed, broadcasting: 3 I0226 00:12:01.897498 9 log.go:172] (0xc001e3eb40) (1) Data frame handling I0226 00:12:01.897536 9 log.go:172] (0xc001e3eb40) (1) Data frame sent I0226 00:12:01.897556 9 log.go:172] (0xc002875810) (0xc001e3eb40) Stream removed, broadcasting: 1 I0226 00:12:01.898109 9 log.go:172] (0xc002875810) (0xc001e3edc0) Stream removed, broadcasting: 5 I0226 00:12:01.898334 9 log.go:172] (0xc002875810) Go away received I0226 00:12:01.898893 9 log.go:172] (0xc002875810) (0xc001e3eb40) Stream removed, broadcasting: 1 I0226 00:12:01.898956 9 log.go:172] (0xc002875810) (0xc001868f00) Stream removed, broadcasting: 3 I0226 00:12:01.898974 9 log.go:172] (0xc002875810) (0xc001e3edc0) Stream removed, broadcasting: 5 Feb 26 00:12:01.899: INFO: Exec stderr: "" Feb 26 00:12:01.899: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1135 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 26 00:12:01.899: INFO: >>> kubeConfig: /root/.kube/config I0226 00:12:01.957700 9 log.go:172] (0xc0027c2370) (0xc001d906e0) Create stream I0226 00:12:01.957941 9 log.go:172] (0xc0027c2370) (0xc001d906e0) Stream added, broadcasting: 1 I0226 00:12:01.960547 9 log.go:172] (0xc0027c2370) Reply frame received for 1 I0226 00:12:01.960626 9 
log.go:172] (0xc0027c2370) (0xc001869220) Create stream I0226 00:12:01.960638 9 log.go:172] (0xc0027c2370) (0xc001869220) Stream added, broadcasting: 3 I0226 00:12:01.962941 9 log.go:172] (0xc0027c2370) Reply frame received for 3 I0226 00:12:01.963160 9 log.go:172] (0xc0027c2370) (0xc00249a000) Create stream I0226 00:12:01.963229 9 log.go:172] (0xc0027c2370) (0xc00249a000) Stream added, broadcasting: 5 I0226 00:12:01.966818 9 log.go:172] (0xc0027c2370) Reply frame received for 5 I0226 00:12:02.048335 9 log.go:172] (0xc0027c2370) Data frame received for 3 I0226 00:12:02.048690 9 log.go:172] (0xc001869220) (3) Data frame handling I0226 00:12:02.048743 9 log.go:172] (0xc001869220) (3) Data frame sent I0226 00:12:02.189003 9 log.go:172] (0xc0027c2370) (0xc00249a000) Stream removed, broadcasting: 5 I0226 00:12:02.189300 9 log.go:172] (0xc0027c2370) Data frame received for 1 I0226 00:12:02.189355 9 log.go:172] (0xc0027c2370) (0xc001869220) Stream removed, broadcasting: 3 I0226 00:12:02.189976 9 log.go:172] (0xc001d906e0) (1) Data frame handling I0226 00:12:02.190496 9 log.go:172] (0xc001d906e0) (1) Data frame sent I0226 00:12:02.190766 9 log.go:172] (0xc0027c2370) (0xc001d906e0) Stream removed, broadcasting: 1 I0226 00:12:02.190906 9 log.go:172] (0xc0027c2370) Go away received I0226 00:12:02.191348 9 log.go:172] (0xc0027c2370) (0xc001d906e0) Stream removed, broadcasting: 1 I0226 00:12:02.191405 9 log.go:172] (0xc0027c2370) (0xc001869220) Stream removed, broadcasting: 3 I0226 00:12:02.191463 9 log.go:172] (0xc0027c2370) (0xc00249a000) Stream removed, broadcasting: 5 Feb 26 00:12:02.191: INFO: Exec stderr: "" Feb 26 00:12:02.191: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1135 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 26 00:12:02.192: INFO: >>> kubeConfig: /root/.kube/config I0226 00:12:02.245074 9 log.go:172] (0xc001944370) (0xc00249a460) Create stream I0226 00:12:02.245474 9 log.go:172] (0xc001944370) (0xc00249a460) Stream added, broadcasting: 1 I0226 00:12:02.253438 9 log.go:172] (0xc001944370) Reply frame received for 1 I0226 00:12:02.253697 9 log.go:172] (0xc001944370) (0xc001eacdc0) Create stream I0226 00:12:02.253781 9 log.go:172] (0xc001944370) (0xc001eacdc0) Stream added, broadcasting: 3 I0226 00:12:02.267597 9 log.go:172] (0xc001944370) Reply frame received for 3 I0226 00:12:02.267744 9 log.go:172] (0xc001944370) (0xc001e3ee60) Create stream I0226 00:12:02.267789 9 log.go:172] (0xc001944370) (0xc001e3ee60) Stream added, broadcasting: 5 I0226 00:12:02.270306 9 log.go:172] (0xc001944370) Reply frame received for 5 I0226 00:12:02.365818 9 log.go:172] (0xc001944370) Data frame received for 3 I0226 00:12:02.365903 9 log.go:172] (0xc001eacdc0) (3) Data frame handling I0226 00:12:02.365932 9 log.go:172] (0xc001eacdc0) (3) Data frame sent I0226 00:12:02.441754 9 log.go:172] (0xc001944370) (0xc001eacdc0) Stream removed, broadcasting: 3 I0226 00:12:02.442059 9 log.go:172] (0xc001944370) Data frame received for 1 I0226 00:12:02.442087 9 log.go:172] (0xc00249a460) (1) Data frame handling I0226 00:12:02.442109 9 log.go:172] (0xc00249a460) (1) Data frame sent I0226 00:12:02.442124 9 log.go:172] (0xc001944370) (0xc00249a460) Stream removed, broadcasting: 1 I0226 00:12:02.442430 9 log.go:172] (0xc001944370) (0xc001e3ee60) Stream removed, broadcasting: 5 I0226 00:12:02.442503 9 log.go:172] (0xc001944370) (0xc00249a460) Stream removed, broadcasting: 1 I0226 00:12:02.442640 9 
log.go:172] (0xc001944370) (0xc001eacdc0) Stream removed, broadcasting: 3 I0226 00:12:02.442688 9 log.go:172] (0xc001944370) (0xc001e3ee60) Stream removed, broadcasting: 5 I0226 00:12:02.442892 9 log.go:172] (0xc001944370) Go away received Feb 26 00:12:02.443: INFO: Exec stderr: "" Feb 26 00:12:02.443: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1135 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 26 00:12:02.444: INFO: >>> kubeConfig: /root/.kube/config I0226 00:12:02.483575 9 log.go:172] (0xc002e329a0) (0xc001ead0e0) Create stream I0226 00:12:02.483774 9 log.go:172] (0xc002e329a0) (0xc001ead0e0) Stream added, broadcasting: 1 I0226 00:12:02.498603 9 log.go:172] (0xc002e329a0) Reply frame received for 1 I0226 00:12:02.498794 9 log.go:172] (0xc002e329a0) (0xc001869360) Create stream I0226 00:12:02.498818 9 log.go:172] (0xc002e329a0) (0xc001869360) Stream added, broadcasting: 3 I0226 00:12:02.500379 9 log.go:172] (0xc002e329a0) Reply frame received for 3 I0226 00:12:02.500456 9 log.go:172] (0xc002e329a0) (0xc00249a500) Create stream I0226 00:12:02.500479 9 log.go:172] (0xc002e329a0) (0xc00249a500) Stream added, broadcasting: 5 I0226 00:12:02.502005 9 log.go:172] (0xc002e329a0) Reply frame received for 5 I0226 00:12:02.609136 9 log.go:172] (0xc002e329a0) Data frame received for 3 I0226 00:12:02.609561 9 log.go:172] (0xc001869360) (3) Data frame handling I0226 00:12:02.609611 9 log.go:172] (0xc001869360) (3) Data frame sent I0226 00:12:02.702884 9 log.go:172] (0xc002e329a0) (0xc00249a500) Stream removed, broadcasting: 5 I0226 00:12:02.703132 9 log.go:172] (0xc002e329a0) Data frame received for 1 I0226 00:12:02.703210 9 log.go:172] (0xc002e329a0) (0xc001869360) Stream removed, broadcasting: 3 I0226 00:12:02.703558 9 log.go:172] (0xc001ead0e0) (1) Data frame handling I0226 00:12:02.703730 9 log.go:172] (0xc001ead0e0) (1) Data frame sent I0226 00:12:02.703797 9 log.go:172] (0xc002e329a0) (0xc001ead0e0) Stream removed, broadcasting: 1 I0226 00:12:02.703856 9 log.go:172] (0xc002e329a0) Go away received I0226 00:12:02.704232 9 log.go:172] (0xc002e329a0) (0xc001ead0e0) Stream removed, broadcasting: 1 I0226 00:12:02.704253 9 log.go:172] (0xc002e329a0) (0xc001869360) Stream removed, broadcasting: 3 I0226 00:12:02.704267 9 log.go:172] (0xc002e329a0) (0xc00249a500) Stream removed, broadcasting: 5 Feb 26 00:12:02.704: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:12:02.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1135" for this suite. 
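To summarize what the exec streams above verify: the kubelet writes /etc/hosts for ordinary containers, but not for a container that mounts its own file over /etc/hosts, and not for a pod running with hostNetwork=true. A hedged sketch of a pod exhibiting the first two cases follows (client-go v0.18+ assumed; the names, image, and the hostPath source are illustrative, not taken from the test).

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEtcHostsTestPod builds a pod with one container whose /etc/hosts the
// kubelet manages and one that mounts a file over /etc/hosts, which opts
// that container out of kubelet management -- the distinction the test checks.
func createEtcHostsTestPod(ctx context.Context, cs *kubernetes.Clientset, ns string) (*corev1.Pod, error) {
	hostPathType := corev1.HostPathFileOrCreate
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "hosts-file",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts", Type: &hostPathType},
				},
			}},
			Containers: []corev1.Container{
				{
					// /etc/hosts in this container is written by the kubelet.
					Name:    "managed",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				},
				{
					// Mounting anything at /etc/hosts disables kubelet
					// management for this container only.
					Name:         "unmanaged",
					Image:        "busybox",
					Command:      []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "hosts-file", MountPath: "/etc/hosts"}},
				},
			},
		},
	}
	return cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}

For the hostNetwork=true half of the test, the same pod shape would instead set pod.Spec.HostNetwork = true, in which case the kubelet leaves /etc/hosts alone entirely.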
• [SLOW TEST:24.675 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":66,"skipped":1196,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:12:02.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 26 00:12:03.750: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 26 00:12:06.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272723, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272723, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272723, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272723, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:12:08.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272723, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272723, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272723, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272723, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:12:10.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272723, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272723, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272723, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272723, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 26 00:12:13.469: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Feb 26 00:12:23.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4927 to-be-attached-pod -i -c=container1' Feb 26 00:12:23.878: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:12:23.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4927" for this suite. STEP: Destroying namespace "webhook-4927-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:21.396 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":280,"completed":67,"skipped":1206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:12:24.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-faf7fad7-1ae5-4df5-bdbf-5ef35ac45ac4 STEP: Creating secret with name s-test-opt-upd-990c1710-c459-4bbe-9dad-00d5feb7c613 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-faf7fad7-1ae5-4df5-bdbf-5ef35ac45ac4 STEP: Updating secret s-test-opt-upd-990c1710-c459-4bbe-9dad-00d5feb7c613 STEP: Creating secret with name s-test-opt-create-6e795d16-0181-4e85-ae89-5d0069286756 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:12:42.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1143" for this suite. 
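The projected-secret spec above relies on two behaviors: marking each projected source optional lets the pod keep running while a referenced secret is absent, and the kubelet's periodic volume sync reflects the delete/update/create sequence in the mounted files, which is what "waiting to observe update in volume" polls for. A minimal sketch of the volume shape, with illustrative names, assuming client-go v0.18+:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPodWithOptionalProjectedSecrets projects two secrets into one volume
// and marks them optional, so the volume tolerates secrets that are deleted
// or created after the pod starts.
func createPodWithOptionalProjectedSecrets(ctx context.Context, cs *kubernetes.Clientset, ns string) (*corev1.Pod, error) {
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secrets",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
								Optional:             &optional,
							}},
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
								Optional:             &optional,
							}},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox",
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secrets", MountPath: "/etc/projected"}},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}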
• [SLOW TEST:18.566 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":68,"skipped":1232,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:12:42.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:12:52.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-705" for this suite. STEP: Destroying namespace "nsdeletetest-7387" for this suite. Feb 26 00:12:52.635: INFO: Namespace nsdeletetest-7387 was already deleted STEP: Destroying namespace "nsdeletetest-2591" for this suite. 
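The Namespaces spec above depends on cascading deletion: removing a namespace garbage-collects every object inside it, so a recreated namespace of the same name starts empty. A sketch of the check follows, assuming client-go v0.18+ and deliberately eliding the wait loop for the old namespace to finish terminating (namespace deletion is asynchronous, which is why the test has an explicit "Waiting for the namespace to be removed" step).

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyNoServicesAfterNamespaceDelete deletes a namespace, recreates it, and
// reports whether the recreated namespace contains any services.
func verifyNoServicesAfterNamespaceDelete(ctx context.Context, cs *kubernetes.Clientset, ns string) (bool, error) {
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		return false, err
	}
	// ... wait here until the namespace is fully removed, then recreate it ...
	_, err := cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	svcs, err := cs.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	return len(svcs.Items) == 0, nil
}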
• [SLOW TEST:10.012 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":280,"completed":69,"skipped":1233,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:12:52.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Feb 26 00:13:03.808: INFO: Successfully updated pod "annotationupdate025f8396-806d-423b-833d-76633eb8b53f" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:13:05.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5344" for this suite. 
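The annotation-update spec above works because a downward API volume is resynced by the kubelet: after the test mutates the pod's annotations ("Successfully updated pod" in the log), the projected file is rewritten in place inside the running container. A sketch of such a pod, with illustrative names and a placeholder command, assuming client-go v0.18+:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createAnnotationUpdatePod exposes the pod's own annotations through a
// downward API volume; the kubelet rewrites the projected file whenever the
// annotations change.
func createAnnotationUpdatePod(ctx context.Context, cs *kubernetes.Clientset, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "bar"}, // illustrative annotation
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}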
• [SLOW TEST:13.221 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":70,"skipped":1239,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:13:05.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-35bff9ce-46b0-4a18-81a0-0b4393780d1b STEP: Creating a pod to test consume configMaps Feb 26 00:13:06.055: INFO: Waiting up to 5m0s for pod "pod-configmaps-dce99a65-ab8b-48a9-a570-d70fa98631ac" in namespace "configmap-1025" to be "success or failure" Feb 26 00:13:06.084: INFO: Pod "pod-configmaps-dce99a65-ab8b-48a9-a570-d70fa98631ac": Phase="Pending", Reason="", readiness=false. Elapsed: 29.175956ms Feb 26 00:13:08.091: INFO: Pod "pod-configmaps-dce99a65-ab8b-48a9-a570-d70fa98631ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03604808s Feb 26 00:13:10.096: INFO: Pod "pod-configmaps-dce99a65-ab8b-48a9-a570-d70fa98631ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04032431s Feb 26 00:13:12.102: INFO: Pod "pod-configmaps-dce99a65-ab8b-48a9-a570-d70fa98631ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046761695s Feb 26 00:13:14.110: INFO: Pod "pod-configmaps-dce99a65-ab8b-48a9-a570-d70fa98631ac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054435445s Feb 26 00:13:16.129: INFO: Pod "pod-configmaps-dce99a65-ab8b-48a9-a570-d70fa98631ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073218416s STEP: Saw pod success Feb 26 00:13:16.129: INFO: Pod "pod-configmaps-dce99a65-ab8b-48a9-a570-d70fa98631ac" satisfied condition "success or failure" Feb 26 00:13:16.138: INFO: Trying to get logs from node jerma-node pod pod-configmaps-dce99a65-ab8b-48a9-a570-d70fa98631ac container configmap-volume-test: STEP: delete the pod Feb 26 00:13:16.358: INFO: Waiting for pod pod-configmaps-dce99a65-ab8b-48a9-a570-d70fa98631ac to disappear Feb 26 00:13:16.424: INFO: Pod pod-configmaps-dce99a65-ab8b-48a9-a570-d70fa98631ac no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:13:16.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1025" for this suite. 
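The ConfigMap spec above mounts a single ConfigMap through two independent volumes and reads the same content from both paths. A sketch of the pod shape, assuming client-go v0.18+; the key read by the command (data-1) is an assumption, since the log does not show the ConfigMap's contents:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createMultiVolumeConfigMapPod mounts the same ConfigMap at two paths via
// two separate volumes, as the spec above does.
func createMultiVolumeConfigMapPod(ctx context.Context, cs *kubernetes.Clientset, ns, cmName string) (*corev1.Pod, error) {
	cmSource := func() corev1.VolumeSource {
		return corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
		}}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "cm-vol-1", VolumeSource: cmSource()},
				{Name: "cm-vol-2", VolumeSource: cmSource()},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"}, // data-1 is assumed
				VolumeMounts: []corev1.VolumeMount{
					{Name: "cm-vol-1", MountPath: "/etc/cm-1", ReadOnly: true},
					{Name: "cm-vol-2", MountPath: "/etc/cm-2", ReadOnly: true},
				},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}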
• [SLOW TEST:10.514 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":71,"skipped":1247,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:13:16.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 26 00:13:17.177: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 26 00:13:19.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:13:22.133: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:13:23.764: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:13:25.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272797, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 26 00:13:28.799: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:13:29.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1288" for this suite. STEP: Destroying namespace "webhook-1288-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.860 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":280,"completed":72,"skipped":1254,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:13:29.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Feb 26 00:13:29.376: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:13:44.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4383" for this suite. 
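The init-container spec above relies on the ordering guarantee: with restartPolicy: Never, each init container must run to successful completion, one at a time, before the app container is invoked at all. A sketch of such a pod (images and commands are placeholders), assuming client-go v0.18+:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPodWithInitContainers creates a RestartNever pod whose two init
// containers run sequentially before the main container starts.
func createPodWithInitContainers(ctx context.Context, cs *kubernetes.Clientset, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name: "run1", Image: "busybox", Command: []string{"true"},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}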
• [SLOW TEST:15.777 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":73,"skipped":1256,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:13:45.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test env composition Feb 26 00:13:45.217: INFO: Waiting up to 5m0s for pod "var-expansion-0f6a04ec-45fa-49ce-92e5-cc644dad7625" in namespace "var-expansion-9820" to be "success or failure" Feb 26 00:13:45.242: INFO: Pod "var-expansion-0f6a04ec-45fa-49ce-92e5-cc644dad7625": Phase="Pending", Reason="", readiness=false. Elapsed: 24.056179ms Feb 26 00:13:47.249: INFO: Pod "var-expansion-0f6a04ec-45fa-49ce-92e5-cc644dad7625": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031818298s Feb 26 00:13:49.254: INFO: Pod "var-expansion-0f6a04ec-45fa-49ce-92e5-cc644dad7625": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036312458s Feb 26 00:13:51.277: INFO: Pod "var-expansion-0f6a04ec-45fa-49ce-92e5-cc644dad7625": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059165268s Feb 26 00:13:53.285: INFO: Pod "var-expansion-0f6a04ec-45fa-49ce-92e5-cc644dad7625": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067686688s STEP: Saw pod success Feb 26 00:13:53.285: INFO: Pod "var-expansion-0f6a04ec-45fa-49ce-92e5-cc644dad7625" satisfied condition "success or failure" Feb 26 00:13:53.291: INFO: Trying to get logs from node jerma-node pod var-expansion-0f6a04ec-45fa-49ce-92e5-cc644dad7625 container dapi-container: STEP: delete the pod Feb 26 00:13:53.373: INFO: Waiting for pod var-expansion-0f6a04ec-45fa-49ce-92e5-cc644dad7625 to disappear Feb 26 00:13:53.390: INFO: Pod var-expansion-0f6a04ec-45fa-49ce-92e5-cc644dad7625 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:13:53.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9820" for this suite. 
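The env-composition spec above uses $(VAR) expansion: an environment variable's value may reference variables declared earlier in the same env list, and the kubelet substitutes them before starting the container. A sketch with illustrative names and values, assuming client-go v0.18+:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEnvCompositionPod composes FOOBAR out of the FOO and BAR variables
// declared earlier in the same env list.
func createEnvCompositionPod(ctx context.Context, cs *kubernetes.Clientset, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// $(FOO) and $(BAR) expand to the values declared above.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}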
• [SLOW TEST:8.327 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":280,"completed":74,"skipped":1259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:13:53.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 26 00:13:53.609: INFO: Waiting up to 5m0s for pod "downward-api-f1e85e27-f4bc-4791-8564-6f7520b70d8b" in namespace "downward-api-737" to be "success or failure" Feb 26 00:13:53.639: INFO: Pod "downward-api-f1e85e27-f4bc-4791-8564-6f7520b70d8b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.365704ms Feb 26 00:13:55.648: INFO: Pod "downward-api-f1e85e27-f4bc-4791-8564-6f7520b70d8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03898718s Feb 26 00:13:57.670: INFO: Pod "downward-api-f1e85e27-f4bc-4791-8564-6f7520b70d8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061226145s Feb 26 00:13:59.677: INFO: Pod "downward-api-f1e85e27-f4bc-4791-8564-6f7520b70d8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067699858s Feb 26 00:14:01.684: INFO: Pod "downward-api-f1e85e27-f4bc-4791-8564-6f7520b70d8b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074745229s Feb 26 00:14:03.696: INFO: Pod "downward-api-f1e85e27-f4bc-4791-8564-6f7520b70d8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086875966s STEP: Saw pod success Feb 26 00:14:03.696: INFO: Pod "downward-api-f1e85e27-f4bc-4791-8564-6f7520b70d8b" satisfied condition "success or failure" Feb 26 00:14:03.702: INFO: Trying to get logs from node jerma-node pod downward-api-f1e85e27-f4bc-4791-8564-6f7520b70d8b container dapi-container: STEP: delete the pod Feb 26 00:14:03.844: INFO: Waiting for pod downward-api-f1e85e27-f4bc-4791-8564-6f7520b70d8b to disappear Feb 26 00:14:03.852: INFO: Pod downward-api-f1e85e27-f4bc-4791-8564-6f7520b70d8b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:14:03.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-737" for this suite. 
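The Downward API spec above injects the pod's own UID through a fieldRef; metadata.uid is resolved by the kubelet when the container starts. A sketch, assuming client-go v0.18+ with illustrative names:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPodUIDEnvPod exposes the pod's own UID as the POD_UID env var via
// the downward API.
func createPodUIDEnvPod(ctx context.Context, cs *kubernetes.Clientset, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-uid-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}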
• [SLOW TEST:10.470 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":280,"completed":75,"skipped":1295,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:14:03.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 26 00:14:04.083: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c834b98a-3633-4a41-86d2-62831e6c60cb" in namespace "projected-5564" to be "success or failure" Feb 26 00:14:04.110: INFO: Pod "downwardapi-volume-c834b98a-3633-4a41-86d2-62831e6c60cb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.350757ms Feb 26 00:14:06.118: INFO: Pod "downwardapi-volume-c834b98a-3633-4a41-86d2-62831e6c60cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034324721s Feb 26 00:14:08.125: INFO: Pod "downwardapi-volume-c834b98a-3633-4a41-86d2-62831e6c60cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041263639s Feb 26 00:14:10.134: INFO: Pod "downwardapi-volume-c834b98a-3633-4a41-86d2-62831e6c60cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050166305s Feb 26 00:14:12.142: INFO: Pod "downwardapi-volume-c834b98a-3633-4a41-86d2-62831e6c60cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058911453s Feb 26 00:14:14.150: INFO: Pod "downwardapi-volume-c834b98a-3633-4a41-86d2-62831e6c60cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066263045s STEP: Saw pod success Feb 26 00:14:14.150: INFO: Pod "downwardapi-volume-c834b98a-3633-4a41-86d2-62831e6c60cb" satisfied condition "success or failure" Feb 26 00:14:14.152: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c834b98a-3633-4a41-86d2-62831e6c60cb container client-container: STEP: delete the pod Feb 26 00:14:14.199: INFO: Waiting for pod downwardapi-volume-c834b98a-3633-4a41-86d2-62831e6c60cb to disappear Feb 26 00:14:14.205: INFO: Pod downwardapi-volume-c834b98a-3633-4a41-86d2-62831e6c60cb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:14:14.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5564" for this suite. 
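The projected downwardAPI spec above sets a per-item mode on the projected file, which is what the test inspects after the pod succeeds. A sketch with an assumed mode of 0400 (the log does not show the exact mode used), client-go v0.18+:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createModeItemPod projects metadata.name into a file whose permission bits
// are set per item rather than per volume.
func createModeItemPod(ctx context.Context, cs *kubernetes.Clientset, ns string) (*corev1.Pod, error) {
	mode := int32(0400)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
									Mode:     &mode, // 0400 -> "-r--------"
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}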
• [SLOW TEST:10.341 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":76,"skipped":1303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:14:14.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service nodeport-service with the type=NodePort in namespace services-835 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-835 STEP: creating replication controller externalsvc in namespace services-835 I0226 00:14:14.420656 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-835, replica count: 2 I0226 00:14:17.472756 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:14:20.473881 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:14:23.474746 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:14:26.475634 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Feb 26 00:14:26.645: INFO: Creating new exec pod Feb 26 00:14:34.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-835 execpoddl27t -- /bin/sh -x -c nslookup nodeport-service' Feb 26 00:14:37.010: INFO: stderr: "I0226 00:14:36.804432 1005 log.go:172] (0xc00056d340) (0xc0008b80a0) Create stream\nI0226 00:14:36.804602 1005 log.go:172] (0xc00056d340) (0xc0008b80a0) Stream added, broadcasting: 1\nI0226 00:14:36.809636 1005 log.go:172] (0xc00056d340) Reply frame received for 1\nI0226 00:14:36.809689 1005 log.go:172] (0xc00056d340) (0xc00062bb80) Create stream\nI0226 00:14:36.809701 1005 log.go:172] (0xc00056d340) (0xc00062bb80) Stream added, broadcasting: 3\nI0226 00:14:36.812929 1005 log.go:172] (0xc00056d340) Reply frame received for 3\nI0226 00:14:36.813045 1005 log.go:172] (0xc00056d340) 
(0xc00062bd60) Create stream\nI0226 00:14:36.813061 1005 log.go:172] (0xc00056d340) (0xc00062bd60) Stream added, broadcasting: 5\nI0226 00:14:36.815471 1005 log.go:172] (0xc00056d340) Reply frame received for 5\nI0226 00:14:36.904874 1005 log.go:172] (0xc00056d340) Data frame received for 5\nI0226 00:14:36.904934 1005 log.go:172] (0xc00062bd60) (5) Data frame handling\nI0226 00:14:36.904972 1005 log.go:172] (0xc00062bd60) (5) Data frame sent\n+ nslookup nodeport-service\nI0226 00:14:36.923032 1005 log.go:172] (0xc00056d340) Data frame received for 3\nI0226 00:14:36.923063 1005 log.go:172] (0xc00062bb80) (3) Data frame handling\nI0226 00:14:36.923082 1005 log.go:172] (0xc00062bb80) (3) Data frame sent\nI0226 00:14:36.925038 1005 log.go:172] (0xc00056d340) Data frame received for 3\nI0226 00:14:36.925050 1005 log.go:172] (0xc00062bb80) (3) Data frame handling\nI0226 00:14:36.925064 1005 log.go:172] (0xc00062bb80) (3) Data frame sent\nI0226 00:14:37.002067 1005 log.go:172] (0xc00056d340) Data frame received for 1\nI0226 00:14:37.002125 1005 log.go:172] (0xc00056d340) (0xc00062bd60) Stream removed, broadcasting: 5\nI0226 00:14:37.002256 1005 log.go:172] (0xc0008b80a0) (1) Data frame handling\nI0226 00:14:37.002299 1005 log.go:172] (0xc0008b80a0) (1) Data frame sent\nI0226 00:14:37.002333 1005 log.go:172] (0xc00056d340) (0xc00062bb80) Stream removed, broadcasting: 3\nI0226 00:14:37.002358 1005 log.go:172] (0xc00056d340) (0xc0008b80a0) Stream removed, broadcasting: 1\nI0226 00:14:37.002373 1005 log.go:172] (0xc00056d340) Go away received\nI0226 00:14:37.003141 1005 log.go:172] (0xc00056d340) (0xc0008b80a0) Stream removed, broadcasting: 1\nI0226 00:14:37.003152 1005 log.go:172] (0xc00056d340) (0xc00062bb80) Stream removed, broadcasting: 3\nI0226 00:14:37.003157 1005 log.go:172] (0xc00056d340) (0xc00062bd60) Stream removed, broadcasting: 5\n" Feb 26 00:14:37.010: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-835.svc.cluster.local\tcanonical name = externalsvc.services-835.svc.cluster.local.\nName:\texternalsvc.services-835.svc.cluster.local\nAddress: 10.96.32.233\n\n" STEP: deleting ReplicationController externalsvc in namespace services-835, will wait for the garbage collector to delete the pods Feb 26 00:14:37.078: INFO: Deleting ReplicationController externalsvc took: 12.087721ms Feb 26 00:14:37.379: INFO: Terminating ReplicationController externalsvc pods took: 300.678132ms Feb 26 00:14:53.997: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:14:54.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-835" for this suite. 
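Note: the nslookup output above shows the type flip worked, since nodeport-service now resolves as a CNAME to externalsvc. Switching a Service from NodePort to ExternalName amounts to rewriting the spec: the allocated cluster IP and ports are dropped and an external DNS name is supplied. A sketch of that mutation (the selector label is an assumption; the FQDN mirrors the externalsvc created above; the actual update call is omitted):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-service", Namespace: "services-835"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "externalsvc"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}

	// Flip the service to ExternalName: clear the allocated fields and point
	// the service's DNS record at another FQDN, as the STEP above does.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-835.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	svc.Spec.Ports = nil
	svc.Spec.Selector = nil

	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}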
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:39.823 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":280,"completed":77,"skipped":1326,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:14:54.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:15:05.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7916" for this suite. 
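Note: the wrapper-volume test above mounts a Secret and a ConfigMap into one pod; both volume types are implemented on top of an emptyDir wrapper inside the kubelet, so the test guards against path or ownership conflicts between them. A sketch of the two-volume pod shape (all names here are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "c",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-vol", MountPath: "/etc/secret"},
					{Name: "cm-vol", MountPath: "/etc/config"},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "secret-vol", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"}}},
				{Name: "cm-vol", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-cm"}}}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}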
• [SLOW TEST:11.100 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":280,"completed":78,"skipped":1344,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:15:05.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with configMap that has name projected-configmap-test-upd-9ba3b71c-a2be-4b57-ac05-2513b5c704bf STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-9ba3b71c-a2be-4b57-ac05-2513b5c704bf STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:16:38.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4314" for this suite. 
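Note: the 93-second wall time above is dominated by "waiting to observe update in volume": projected ConfigMap content is rewritten by the kubelet on a later sync loop rather than instantly, so the test polls the mounted file. A sketch of the projected ConfigMap volume a consuming pod would declare (the key and path are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Projected ConfigMap volume; when the ConfigMap object is updated, the
	// kubelet refreshes the projected files on a subsequent sync, which is why
	// the test above polls the contents instead of asserting immediately.
	vol := corev1.Volume{
		Name: "projected-cm",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-upd"},
						Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}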
• [SLOW TEST:93.535 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":79,"skipped":1352,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:16:38.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 26 00:16:39.587: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 26 00:16:41.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:16:43.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:16:45.620: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:16:47.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:16:49.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718272999, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 26 00:16:52.655: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:16:52.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4605" for this suite. STEP: Destroying namespace "webhook-4605-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.390 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":280,"completed":80,"skipped":1354,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:16:53.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5602, will wait for the garbage collector to delete the pods Feb 26 00:17:05.253: INFO: Deleting Job.batch foo took: 52.35993ms Feb 26 00:17:05.653: INFO: Terminating Job.batch foo pods took: 400.506321ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:17:52.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5602" for this suite. 
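Note: the "will wait for the garbage collector to delete the pods" step above corresponds to deleting the Job with a propagation policy, so the Job object disappears first and its pods are reaped asynchronously. One way to reproduce this with client-go, assuming the context-taking method signatures of recent client-go releases (namespace and job name taken from the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Delete the Job and let the garbage collector remove its pods in the
	// background, matching the deletion step the test logs above.
	policy := metav1.DeletePropagationBackground
	err = cs.BatchV1().Jobs("job-5602").Delete(context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("job deleted; pods will be garbage collected")
}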
• [SLOW TEST:59.521 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":280,"completed":81,"skipped":1376,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:17:52.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service nodeport-test with type=NodePort in namespace services-5010 STEP: creating replication controller nodeport-test in namespace services-5010 I0226 00:17:52.840111 9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-5010, replica count: 2 I0226 00:17:55.893021 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:17:58.893886 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:18:01.895561 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:18:04.897010 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 00:18:07.898164 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 26 00:18:07.898: INFO: Creating new exec pod Feb 26 00:18:16.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5010 execpodgbnwg -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Feb 26 00:18:17.281: INFO: stderr: "I0226 00:18:17.111256 1033 log.go:172] (0xc000bd8370) (0xc000bd0140) Create stream\nI0226 00:18:17.111375 1033 log.go:172] (0xc000bd8370) (0xc000bd0140) Stream added, broadcasting: 1\nI0226 00:18:17.114409 1033 log.go:172] (0xc000bd8370) Reply frame received for 1\nI0226 00:18:17.114446 1033 log.go:172] (0xc000bd8370) (0xc000bae280) Create stream\nI0226 00:18:17.114457 1033 log.go:172] (0xc000bd8370) (0xc000bae280) Stream added, broadcasting: 3\nI0226 00:18:17.115510 1033 log.go:172] (0xc000bd8370) Reply frame received for 3\nI0226 00:18:17.115532 1033 log.go:172] (0xc000bd8370) (0xc000bae320) Create stream\nI0226 00:18:17.115542 1033 log.go:172] (0xc000bd8370) (0xc000bae320) Stream added, broadcasting: 5\nI0226 00:18:17.116794 1033 log.go:172] (0xc000bd8370) Reply frame received for 5\nI0226 
00:18:17.177457 1033 log.go:172] (0xc000bd8370) Data frame received for 5\nI0226 00:18:17.177711 1033 log.go:172] (0xc000bae320) (5) Data frame handling\nI0226 00:18:17.177749 1033 log.go:172] (0xc000bae320) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0226 00:18:17.181229 1033 log.go:172] (0xc000bd8370) Data frame received for 5\nI0226 00:18:17.181244 1033 log.go:172] (0xc000bae320) (5) Data frame handling\nI0226 00:18:17.181253 1033 log.go:172] (0xc000bae320) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0226 00:18:17.265483 1033 log.go:172] (0xc000bd8370) (0xc000bae280) Stream removed, broadcasting: 3\nI0226 00:18:17.265726 1033 log.go:172] (0xc000bd8370) Data frame received for 1\nI0226 00:18:17.265752 1033 log.go:172] (0xc000bd0140) (1) Data frame handling\nI0226 00:18:17.265770 1033 log.go:172] (0xc000bd0140) (1) Data frame sent\nI0226 00:18:17.265790 1033 log.go:172] (0xc000bd8370) (0xc000bd0140) Stream removed, broadcasting: 1\nI0226 00:18:17.266485 1033 log.go:172] (0xc000bd8370) (0xc000bae320) Stream removed, broadcasting: 5\nI0226 00:18:17.268047 1033 log.go:172] (0xc000bd8370) (0xc000bd0140) Stream removed, broadcasting: 1\nI0226 00:18:17.268207 1033 log.go:172] (0xc000bd8370) (0xc000bae280) Stream removed, broadcasting: 3\nI0226 00:18:17.268270 1033 log.go:172] (0xc000bd8370) (0xc000bae320) Stream removed, broadcasting: 5\n" Feb 26 00:18:17.282: INFO: stdout: "" Feb 26 00:18:17.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5010 execpodgbnwg -- /bin/sh -x -c nc -zv -t -w 2 10.96.134.238 80' Feb 26 00:18:17.644: INFO: stderr: "I0226 00:18:17.428968 1053 log.go:172] (0xc000a10000) (0xc000a0e000) Create stream\nI0226 00:18:17.429038 1053 log.go:172] (0xc000a10000) (0xc000a0e000) Stream added, broadcasting: 1\nI0226 00:18:17.432458 1053 log.go:172] (0xc000a10000) Reply frame received for 1\nI0226 00:18:17.432521 1053 log.go:172] (0xc000a10000) (0xc00066bd60) Create stream\nI0226 00:18:17.432529 1053 log.go:172] (0xc000a10000) (0xc00066bd60) Stream added, broadcasting: 3\nI0226 00:18:17.434069 1053 log.go:172] (0xc000a10000) Reply frame received for 3\nI0226 00:18:17.434098 1053 log.go:172] (0xc000a10000) (0xc00066be00) Create stream\nI0226 00:18:17.434106 1053 log.go:172] (0xc000a10000) (0xc00066be00) Stream added, broadcasting: 5\nI0226 00:18:17.435758 1053 log.go:172] (0xc000a10000) Reply frame received for 5\nI0226 00:18:17.520205 1053 log.go:172] (0xc000a10000) Data frame received for 5\nI0226 00:18:17.520279 1053 log.go:172] (0xc00066be00) (5) Data frame handling\nI0226 00:18:17.520322 1053 log.go:172] (0xc00066be00) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.134.238 80\nConnection to 10.96.134.238 80 port [tcp/http] succeeded!\nI0226 00:18:17.636083 1053 log.go:172] (0xc000a10000) Data frame received for 1\nI0226 00:18:17.636371 1053 log.go:172] (0xc000a10000) (0xc00066be00) Stream removed, broadcasting: 5\nI0226 00:18:17.636452 1053 log.go:172] (0xc000a0e000) (1) Data frame handling\nI0226 00:18:17.636558 1053 log.go:172] (0xc000a0e000) (1) Data frame sent\nI0226 00:18:17.636628 1053 log.go:172] (0xc000a10000) (0xc00066bd60) Stream removed, broadcasting: 3\nI0226 00:18:17.636665 1053 log.go:172] (0xc000a10000) (0xc000a0e000) Stream removed, broadcasting: 1\nI0226 00:18:17.636719 1053 log.go:172] (0xc000a10000) Go away received\nI0226 00:18:17.637235 1053 log.go:172] (0xc000a10000) (0xc000a0e000) Stream removed, broadcasting: 1\nI0226 00:18:17.637252 1053 log.go:172] 
(0xc000a10000) (0xc00066bd60) Stream removed, broadcasting: 3\nI0226 00:18:17.637261 1053 log.go:172] (0xc000a10000) (0xc00066be00) Stream removed, broadcasting: 5\n" Feb 26 00:18:17.644: INFO: stdout: "" Feb 26 00:18:17.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5010 execpodgbnwg -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30763' Feb 26 00:18:17.928: INFO: stderr: "I0226 00:18:17.764345 1073 log.go:172] (0xc0000f5340) (0xc000a781e0) Create stream\nI0226 00:18:17.764410 1073 log.go:172] (0xc0000f5340) (0xc000a781e0) Stream added, broadcasting: 1\nI0226 00:18:17.768215 1073 log.go:172] (0xc0000f5340) Reply frame received for 1\nI0226 00:18:17.768246 1073 log.go:172] (0xc0000f5340) (0xc000a8a0a0) Create stream\nI0226 00:18:17.768260 1073 log.go:172] (0xc0000f5340) (0xc000a8a0a0) Stream added, broadcasting: 3\nI0226 00:18:17.769498 1073 log.go:172] (0xc0000f5340) Reply frame received for 3\nI0226 00:18:17.769523 1073 log.go:172] (0xc0000f5340) (0xc000687e00) Create stream\nI0226 00:18:17.769534 1073 log.go:172] (0xc0000f5340) (0xc000687e00) Stream added, broadcasting: 5\nI0226 00:18:17.770745 1073 log.go:172] (0xc0000f5340) Reply frame received for 5\nI0226 00:18:17.847080 1073 log.go:172] (0xc0000f5340) Data frame received for 5\nI0226 00:18:17.847138 1073 log.go:172] (0xc000687e00) (5) Data frame handling\nI0226 00:18:17.847161 1073 log.go:172] (0xc000687e00) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30763\nI0226 00:18:17.848016 1073 log.go:172] (0xc0000f5340) Data frame received for 5\nI0226 00:18:17.848029 1073 log.go:172] (0xc000687e00) (5) Data frame handling\nI0226 00:18:17.848036 1073 log.go:172] (0xc000687e00) (5) Data frame sent\nConnection to 10.96.2.250 30763 port [tcp/30763] succeeded!\nI0226 00:18:17.921075 1073 log.go:172] (0xc0000f5340) (0xc000a8a0a0) Stream removed, broadcasting: 3\nI0226 00:18:17.921146 1073 log.go:172] (0xc0000f5340) Data frame received for 1\nI0226 00:18:17.921167 1073 log.go:172] (0xc000a781e0) (1) Data frame handling\nI0226 00:18:17.921183 1073 log.go:172] (0xc000a781e0) (1) Data frame sent\nI0226 00:18:17.921194 1073 log.go:172] (0xc0000f5340) (0xc000687e00) Stream removed, broadcasting: 5\nI0226 00:18:17.921265 1073 log.go:172] (0xc0000f5340) (0xc000a781e0) Stream removed, broadcasting: 1\nI0226 00:18:17.921279 1073 log.go:172] (0xc0000f5340) Go away received\nI0226 00:18:17.921886 1073 log.go:172] (0xc0000f5340) (0xc000a781e0) Stream removed, broadcasting: 1\nI0226 00:18:17.921902 1073 log.go:172] (0xc0000f5340) (0xc000a8a0a0) Stream removed, broadcasting: 3\nI0226 00:18:17.921909 1073 log.go:172] (0xc0000f5340) (0xc000687e00) Stream removed, broadcasting: 5\n" Feb 26 00:18:17.928: INFO: stdout: "" Feb 26 00:18:17.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5010 execpodgbnwg -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30763' Feb 26 00:18:18.249: INFO: stderr: "I0226 00:18:18.064760 1092 log.go:172] (0xc0008fe8f0) (0xc0009363c0) Create stream\nI0226 00:18:18.064922 1092 log.go:172] (0xc0008fe8f0) (0xc0009363c0) Stream added, broadcasting: 1\nI0226 00:18:18.077782 1092 log.go:172] (0xc0008fe8f0) Reply frame received for 1\nI0226 00:18:18.077831 1092 log.go:172] (0xc0008fe8f0) (0xc0005e7c20) Create stream\nI0226 00:18:18.077842 1092 log.go:172] (0xc0008fe8f0) (0xc0005e7c20) Stream added, broadcasting: 3\nI0226 00:18:18.079176 1092 log.go:172] (0xc0008fe8f0) Reply frame received for 3\nI0226 00:18:18.079197 1092 log.go:172] 
(0xc0008fe8f0) (0xc000582820) Create stream\nI0226 00:18:18.079204 1092 log.go:172] (0xc0008fe8f0) (0xc000582820) Stream added, broadcasting: 5\nI0226 00:18:18.080349 1092 log.go:172] (0xc0008fe8f0) Reply frame received for 5\nI0226 00:18:18.159466 1092 log.go:172] (0xc0008fe8f0) Data frame received for 5\nI0226 00:18:18.159785 1092 log.go:172] (0xc000582820) (5) Data frame handling\nI0226 00:18:18.159830 1092 log.go:172] (0xc000582820) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30763\nI0226 00:18:18.164966 1092 log.go:172] (0xc0008fe8f0) Data frame received for 5\nI0226 00:18:18.164995 1092 log.go:172] (0xc000582820) (5) Data frame handling\nI0226 00:18:18.165034 1092 log.go:172] (0xc000582820) (5) Data frame sent\nConnection to 10.96.1.234 30763 port [tcp/30763] succeeded!\nI0226 00:18:18.236756 1092 log.go:172] (0xc0008fe8f0) Data frame received for 1\nI0226 00:18:18.236910 1092 log.go:172] (0xc0009363c0) (1) Data frame handling\nI0226 00:18:18.236985 1092 log.go:172] (0xc0009363c0) (1) Data frame sent\nI0226 00:18:18.237135 1092 log.go:172] (0xc0008fe8f0) (0xc0009363c0) Stream removed, broadcasting: 1\nI0226 00:18:18.237623 1092 log.go:172] (0xc0008fe8f0) (0xc0005e7c20) Stream removed, broadcasting: 3\nI0226 00:18:18.239079 1092 log.go:172] (0xc0008fe8f0) (0xc000582820) Stream removed, broadcasting: 5\nI0226 00:18:18.239183 1092 log.go:172] (0xc0008fe8f0) Go away received\nI0226 00:18:18.239335 1092 log.go:172] (0xc0008fe8f0) (0xc0009363c0) Stream removed, broadcasting: 1\nI0226 00:18:18.239383 1092 log.go:172] (0xc0008fe8f0) (0xc0005e7c20) Stream removed, broadcasting: 3\nI0226 00:18:18.239401 1092 log.go:172] (0xc0008fe8f0) (0xc000582820) Stream removed, broadcasting: 5\n" Feb 26 00:18:18.249: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:18:18.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5010" for this suite. 
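Note: the four nc probes above cover the full NodePort reachability matrix: the service DNS name and the cluster IP on the service port (80), then each node IP on the allocated node port (30763 in this run). A sketch of the Service shape behind it (the selector label is an assumption; the node port itself is normally left zero so the apiserver allocates one from the 30000-32767 range):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test", Namespace: "services-5010"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "nodeport-test"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
				// NodePort left zero: the apiserver allocates one (e.g. 30763 above).
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}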
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:25.688 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":280,"completed":82,"skipped":1385,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:18:18.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-1895 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 26 00:18:18.472: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 26 00:18:18.614: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 26 00:18:21.151: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 26 00:18:22.623: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 26 00:18:25.655: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 26 00:18:26.827: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 26 00:18:28.625: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 26 00:18:31.075: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 26 00:18:32.748: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 26 00:18:34.625: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 26 00:18:36.628: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 26 00:18:38.623: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 26 00:18:40.625: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 26 00:18:42.623: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 26 00:18:44.624: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 26 00:18:44.632: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 26 00:18:46.639: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 26 00:18:54.693: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.44.0.3&port=8080&tries=1'] Namespace:pod-network-test-1895 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 26 00:18:54.693: INFO: >>> 
kubeConfig: /root/.kube/config I0226 00:18:54.738853 9 log.go:172] (0xc002e32630) (0xc001efcbe0) Create stream I0226 00:18:54.738958 9 log.go:172] (0xc002e32630) (0xc001efcbe0) Stream added, broadcasting: 1 I0226 00:18:54.742834 9 log.go:172] (0xc002e32630) Reply frame received for 1 I0226 00:18:54.742869 9 log.go:172] (0xc002e32630) (0xc001eacd20) Create stream I0226 00:18:54.742888 9 log.go:172] (0xc002e32630) (0xc001eacd20) Stream added, broadcasting: 3 I0226 00:18:54.744219 9 log.go:172] (0xc002e32630) Reply frame received for 3 I0226 00:18:54.744246 9 log.go:172] (0xc002e32630) (0xc001eacdc0) Create stream I0226 00:18:54.744257 9 log.go:172] (0xc002e32630) (0xc001eacdc0) Stream added, broadcasting: 5 I0226 00:18:54.745372 9 log.go:172] (0xc002e32630) Reply frame received for 5 I0226 00:18:54.863318 9 log.go:172] (0xc002e32630) Data frame received for 3 I0226 00:18:54.863439 9 log.go:172] (0xc001eacd20) (3) Data frame handling I0226 00:18:54.863483 9 log.go:172] (0xc001eacd20) (3) Data frame sent I0226 00:18:54.967566 9 log.go:172] (0xc002e32630) Data frame received for 1 I0226 00:18:54.967795 9 log.go:172] (0xc001efcbe0) (1) Data frame handling I0226 00:18:54.967828 9 log.go:172] (0xc001efcbe0) (1) Data frame sent I0226 00:18:54.967855 9 log.go:172] (0xc002e32630) (0xc001efcbe0) Stream removed, broadcasting: 1 I0226 00:18:54.968317 9 log.go:172] (0xc002e32630) (0xc001eacd20) Stream removed, broadcasting: 3 I0226 00:18:54.968477 9 log.go:172] (0xc002e32630) (0xc001eacdc0) Stream removed, broadcasting: 5 I0226 00:18:54.968599 9 log.go:172] (0xc002e32630) (0xc001efcbe0) Stream removed, broadcasting: 1 I0226 00:18:54.968679 9 log.go:172] (0xc002e32630) (0xc001eacd20) Stream removed, broadcasting: 3 I0226 00:18:54.968730 9 log.go:172] (0xc002e32630) (0xc001eacdc0) Stream removed, broadcasting: 5 I0226 00:18:54.968852 9 log.go:172] (0xc002e32630) Go away received Feb 26 00:18:54.969: INFO: Waiting for responses: map[] Feb 26 00:18:54.974: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.32.0.5&port=8080&tries=1'] Namespace:pod-network-test-1895 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 26 00:18:54.974: INFO: >>> kubeConfig: /root/.kube/config I0226 00:18:55.030795 9 log.go:172] (0xc0027c2370) (0xc001ead180) Create stream I0226 00:18:55.031001 9 log.go:172] (0xc0027c2370) (0xc001ead180) Stream added, broadcasting: 1 I0226 00:18:55.035742 9 log.go:172] (0xc0027c2370) Reply frame received for 1 I0226 00:18:55.035978 9 log.go:172] (0xc0027c2370) (0xc00249b2c0) Create stream I0226 00:18:55.036010 9 log.go:172] (0xc0027c2370) (0xc00249b2c0) Stream added, broadcasting: 3 I0226 00:18:55.038352 9 log.go:172] (0xc0027c2370) Reply frame received for 3 I0226 00:18:55.038409 9 log.go:172] (0xc0027c2370) (0xc001ead400) Create stream I0226 00:18:55.038439 9 log.go:172] (0xc0027c2370) (0xc001ead400) Stream added, broadcasting: 5 I0226 00:18:55.041415 9 log.go:172] (0xc0027c2370) Reply frame received for 5 I0226 00:18:55.132566 9 log.go:172] (0xc0027c2370) Data frame received for 3 I0226 00:18:55.132626 9 log.go:172] (0xc00249b2c0) (3) Data frame handling I0226 00:18:55.132660 9 log.go:172] (0xc00249b2c0) (3) Data frame sent I0226 00:18:55.207384 9 log.go:172] (0xc0027c2370) (0xc00249b2c0) Stream removed, broadcasting: 3 I0226 00:18:55.207656 9 log.go:172] (0xc0027c2370) Data frame received for 1 I0226 00:18:55.207677 9 log.go:172] (0xc001ead180) (1) 
Data frame handling I0226 00:18:55.207723 9 log.go:172] (0xc001ead180) (1) Data frame sent I0226 00:18:55.207738 9 log.go:172] (0xc0027c2370) (0xc001ead180) Stream removed, broadcasting: 1 I0226 00:18:55.208178 9 log.go:172] (0xc0027c2370) (0xc001ead400) Stream removed, broadcasting: 5 I0226 00:18:55.208291 9 log.go:172] (0xc0027c2370) (0xc001ead180) Stream removed, broadcasting: 1 I0226 00:18:55.208306 9 log.go:172] (0xc0027c2370) (0xc00249b2c0) Stream removed, broadcasting: 3 I0226 00:18:55.208325 9 log.go:172] (0xc0027c2370) (0xc001ead400) Stream removed, broadcasting: 5 I0226 00:18:55.208717 9 log.go:172] (0xc0027c2370) Go away received Feb 26 00:18:55.209: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:18:55.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1895" for this suite. • [SLOW TEST:36.950 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":280,"completed":83,"skipped":1385,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:18:55.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-1806 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Feb 26 00:18:55.447: INFO: Found 0 stateful pods, waiting for 3 Feb 26 00:19:05.479: INFO: Found 1 stateful pods, waiting for 3 Feb 26 00:19:15.462: INFO: Found 2 stateful pods, waiting for 3 Feb 26 00:19:25.454: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 26 00:19:25.454: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 26 00:19:25.454: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to 
docker.io/library/httpd:2.4.39-alpine Feb 26 00:19:25.488: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 26 00:19:35.536: INFO: Updating stateful set ss2 Feb 26 00:19:35.592: INFO: Waiting for Pod statefulset-1806/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Feb 26 00:19:46.933: INFO: Found 2 stateful pods, waiting for 3 Feb 26 00:19:56.942: INFO: Found 2 stateful pods, waiting for 3 Feb 26 00:20:06.942: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 26 00:20:06.942: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 26 00:20:06.942: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 26 00:20:06.971: INFO: Updating stateful set ss2 Feb 26 00:20:07.071: INFO: Waiting for Pod statefulset-1806/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 26 00:20:17.113: INFO: Updating stateful set ss2 Feb 26 00:20:17.241: INFO: Waiting for StatefulSet statefulset-1806/ss2 to complete update Feb 26 00:20:17.241: INFO: Waiting for Pod statefulset-1806/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 26 00:20:27.252: INFO: Waiting for StatefulSet statefulset-1806/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 26 00:20:37.257: INFO: Deleting all statefulset in ns statefulset-1806 Feb 26 00:20:37.261: INFO: Scaling statefulset ss2 to 0 Feb 26 00:21:07.292: INFO: Waiting for statefulset status.replicas updated to 0 Feb 26 00:21:07.299: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:21:07.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1806" for this suite. 
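Note: both the canary and the phased phases above are driven by the RollingUpdate partition: pods with an ordinal >= partition move to the new revision while the rest stay on the old one, so partition=replicas freezes the set, partition=replicas-1 canaries only the highest ordinal (ss2-2 above), and walking the value down to 0 phases the rollout. A sketch of the strategy stanza, sized for the three-replica set in this run:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	partition := int32(2) // only ordinal ss2-2 is updated: a canary

	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	// Lowering Partition step by step (2 -> 1 -> 0) performs the phased
	// rolling update the test above drives.
	out, _ := json.MarshalIndent(strategy, "", "  ")
	fmt.Println(string(out))
}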
• [SLOW TEST:132.103 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":280,"completed":84,"skipped":1391,"failed":0} [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:21:07.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 26 00:21:07.457: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c49302f2-4493-4eb6-92e0-0b1f94298101" in namespace "downward-api-3063" to be "success or failure" Feb 26 00:21:07.487: INFO: Pod "downwardapi-volume-c49302f2-4493-4eb6-92e0-0b1f94298101": Phase="Pending", Reason="", readiness=false. Elapsed: 30.265573ms Feb 26 00:21:09.496: INFO: Pod "downwardapi-volume-c49302f2-4493-4eb6-92e0-0b1f94298101": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039407433s Feb 26 00:21:11.504: INFO: Pod "downwardapi-volume-c49302f2-4493-4eb6-92e0-0b1f94298101": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047331484s Feb 26 00:21:13.519: INFO: Pod "downwardapi-volume-c49302f2-4493-4eb6-92e0-0b1f94298101": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062610389s Feb 26 00:21:15.524: INFO: Pod "downwardapi-volume-c49302f2-4493-4eb6-92e0-0b1f94298101": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.067288808s STEP: Saw pod success Feb 26 00:21:15.524: INFO: Pod "downwardapi-volume-c49302f2-4493-4eb6-92e0-0b1f94298101" satisfied condition "success or failure" Feb 26 00:21:15.527: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c49302f2-4493-4eb6-92e0-0b1f94298101 container client-container: STEP: delete the pod Feb 26 00:21:15.582: INFO: Waiting for pod downwardapi-volume-c49302f2-4493-4eb6-92e0-0b1f94298101 to disappear Feb 26 00:21:15.588: INFO: Pod downwardapi-volume-c49302f2-4493-4eb6-92e0-0b1f94298101 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:21:15.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3063" for this suite. • [SLOW TEST:8.270 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":85,"skipped":1391,"failed":0} SS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:21:15.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:21:15.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8519" for this suite. 
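Note: the "fetching services" step above is a cross-namespace list, which in client-go is a List against the empty namespace. A sketch, assuming the context-taking method signatures of recent client-go releases (the kubeconfig path mirrors this run):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// metav1.NamespaceAll ("") lists services in every namespace, which is
	// how a client can find a service without knowing its namespace upfront.
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s/%s\n", s.Namespace, s.Name)
	}
}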
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":280,"completed":86,"skipped":1393,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:21:15.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-576a6184-3ea0-4256-8b16-082eb8362a68 STEP: Creating a pod to test consume secrets Feb 26 00:21:17.382: INFO: Waiting up to 5m0s for pod "pod-secrets-1b1b6bd6-34b3-4e99-8968-af455c2f1193" in namespace "secrets-6203" to be "success or failure" Feb 26 00:21:17.449: INFO: Pod "pod-secrets-1b1b6bd6-34b3-4e99-8968-af455c2f1193": Phase="Pending", Reason="", readiness=false. Elapsed: 66.57375ms Feb 26 00:21:19.456: INFO: Pod "pod-secrets-1b1b6bd6-34b3-4e99-8968-af455c2f1193": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073988117s Feb 26 00:21:21.475: INFO: Pod "pod-secrets-1b1b6bd6-34b3-4e99-8968-af455c2f1193": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092346183s Feb 26 00:21:23.622: INFO: Pod "pod-secrets-1b1b6bd6-34b3-4e99-8968-af455c2f1193": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239217861s Feb 26 00:21:25.627: INFO: Pod "pod-secrets-1b1b6bd6-34b3-4e99-8968-af455c2f1193": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.244217288s STEP: Saw pod success Feb 26 00:21:25.627: INFO: Pod "pod-secrets-1b1b6bd6-34b3-4e99-8968-af455c2f1193" satisfied condition "success or failure" Feb 26 00:21:25.629: INFO: Trying to get logs from node jerma-node pod pod-secrets-1b1b6bd6-34b3-4e99-8968-af455c2f1193 container secret-volume-test: STEP: delete the pod Feb 26 00:21:25.762: INFO: Waiting for pod pod-secrets-1b1b6bd6-34b3-4e99-8968-af455c2f1193 to disappear Feb 26 00:21:25.771: INFO: Pod pod-secrets-1b1b6bd6-34b3-4e99-8968-af455c2f1193 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:21:25.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6203" for this suite. 
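Note: "consumable in multiple volumes" above means one Secret mounted through two distinct volumes in a single pod. A sketch of that shape (volume names, mount paths, and the command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two volumes backed by the same Secret, mounted at different paths.
	secretVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
			},
		}
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}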
• [SLOW TEST:9.915 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":87,"skipped":1409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:21:25.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Feb 26 00:21:25.853: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 26 00:21:25.944: INFO: Waiting for terminating namespaces to be deleted... Feb 26 00:21:25.947: INFO: Logging pods the kubelet thinks is on node jerma-node before test Feb 26 00:21:25.952: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 26 00:21:25.952: INFO: Container weave ready: true, restart count 1 Feb 26 00:21:25.952: INFO: Container weave-npc ready: true, restart count 0 Feb 26 00:21:25.952: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 26 00:21:25.952: INFO: Container kube-proxy ready: true, restart count 0 Feb 26 00:21:25.952: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Feb 26 00:21:25.972: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 26 00:21:25.972: INFO: Container kube-controller-manager ready: true, restart count 19 Feb 26 00:21:25.972: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 26 00:21:25.972: INFO: Container kube-proxy ready: true, restart count 0 Feb 26 00:21:25.972: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 26 00:21:25.972: INFO: Container weave ready: true, restart count 0 Feb 26 00:21:25.972: INFO: Container weave-npc ready: true, restart count 0 Feb 26 00:21:25.972: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 26 00:21:25.972: INFO: Container kube-scheduler ready: true, restart count 25 Feb 26 00:21:25.972: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 26 00:21:25.972: INFO: Container kube-apiserver ready: true, restart count 1 Feb 26 00:21:25.972: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 
11:47:54 +0000 UTC (1 container statuses recorded) Feb 26 00:21:25.972: INFO: Container etcd ready: true, restart count 1 Feb 26 00:21:25.972: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 26 00:21:25.972: INFO: Container coredns ready: true, restart count 0 Feb 26 00:21:25.972: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 26 00:21:25.972: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f6cbe0edc1467b], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f6cbe0f75d1702], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:21:26.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6032" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":280,"completed":88,"skipped":1451,"failed":0} ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:21:27.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-upd-0d1548ed-02c1-41c9-b6b9-615985a4bf53 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-0d1548ed-02c1-41c9-b6b9-615985a4bf53 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:21:35.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2911" for this suite. 
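Note: as with the projected variant earlier, the plain ConfigMap-volume test above updates the object and then polls the mounted file until the kubelet syncs it. A sketch of the update half, assuming the context-taking method signatures of recent client-go releases (namespace and ConfigMap name mirror the log; the key and value are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ns := "configmap-2911"
	name := "configmap-test-upd-0d1548ed-02c1-41c9-b6b9-615985a4bf53"

	cm, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Change a key; the kubelet rewrites the mounted file on a later sync,
	// which is what "waiting to observe update in volume" above polls for.
	cm.Data = map[string]string{"data-1": "value-2"}
	if _, err := cs.CoreV1().ConfigMaps(ns).Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("configmap updated; volume content will follow on kubelet sync")
}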
• [SLOW TEST:8.406 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":89,"skipped":1451,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:21:35.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 26 00:21:36.301: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 26 00:21:38.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:21:40.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:21:42.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:21:44.432: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273296, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 26 00:21:47.508: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 26 00:21:47.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:21:49.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-483" for this suite. STEP: Destroying namespace "webhook-483-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.304 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":280,"completed":90,"skipped":1473,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:21:49.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 26 00:21:50.561: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 26 00:21:52.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:21:54.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:21:56.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:21:58.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273310, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 26 00:22:01.689: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 26 00:22:01.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1020-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:22:02.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8041" for this suite. STEP: Destroying namespace "webhook-8041-markers" for this suite. 
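The mutating variant differs mainly in kind and in the handler returning a patch rather than a verdict. A sketch, inferring the group and plural from the CRD name e2e-test-webhook-1020-crds.webhook.example.com in the log; everything else is a placeholder:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource-example        # hypothetical
webhooks:
- name: mutate-custom-resource.example.com    # hypothetical
  rules:
  - apiGroups: ["webhook.example.com"]        # inferred from the CRD name
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-1020-crds"] # inferred from the CRD name
  clientConfig:
    service:
      namespace: webhook-8041                 # test namespace from the log
      name: e2e-test-webhook
      path: /mutating-custom-resource         # hypothetical handler path
    caBundle: "<base64-encoded CA>"           # elided
  sideEffects: None
  admissionReviewVersions: ["v1"]

A handler of this sort sets patchType: "JSONPatch" in its response along with a base64-encoded patch such as [{"op":"add","path":"/data/mutated","value":"yes"}], which is how "a custom resource that should be mutated by the webhook" comes back changed on admission.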
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:13.347 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":280,"completed":91,"skipped":1505,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:22:03.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 26 00:22:04.094: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 26 00:22:06.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273324, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273324, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273324, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273323, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:22:08.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273324, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273324, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273324, 
loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273323, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:22:10.119: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273324, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273324, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273324, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273323, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:22:12.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273324, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273324, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273324, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273323, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 26 00:22:15.169: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 26 00:22:15.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1297-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:22:16.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7816" for this suite. STEP: Destroying namespace "webhook-7816-markers" for this suite. 
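The "different stored version" case hinges on a CRD that serves two versions and flips the storage version mid-test ("Patching Custom Resource Definition to set v2 as storage"). A minimal sketch of such a CRD, with a hypothetical kind and a deliberately permissive schema:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-webhook-1297-crds.webhook.example.com   # from the log
spec:
  group: webhook.example.com
  scope: Namespaced
  names:
    plural: e2e-test-webhook-1297-crds
    kind: E2eTestWebhookCrd          # hypothetical kind
  versions:
  - name: v1
    served: true
    storage: true                    # set to false when v2 becomes storage
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false                   # patched to true mid-test
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

Exactly one version may carry storage: true at a time; the test checks that the mutating webhook still patches objects correctly after the flip.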
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:13.804 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":280,"completed":92,"skipped":1523,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:22:16.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 26 00:22:35.091: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 26 00:22:35.095: INFO: Pod pod-with-prestop-http-hook still exists Feb 26 00:22:37.096: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 26 00:22:37.111: INFO: Pod pod-with-prestop-http-hook still exists Feb 26 00:22:39.096: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 26 00:22:39.101: INFO: Pod pod-with-prestop-http-hook still exists Feb 26 00:22:41.096: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 26 00:22:41.101: INFO: Pod pod-with-prestop-http-hook still exists Feb 26 00:22:43.096: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 26 00:22:43.102: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:22:43.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5538" for this suite. 
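The lifecycle-hook test starts a helper pod that serves HTTP ("create the container to handle the HTTPGet hook request"), deletes a pod whose container carries a preStop httpGet hook, and checks that the helper received the request. The pod under test has roughly this shape; the image, port, and path here are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name from the log
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1      # any long-running image works
    lifecycle:
      preStop:
        httpGet:
          host: "<handler-pod-ip>"   # IP of the helper pod; elided
          port: 8080                 # hypothetical handler port
          path: /echo?msg=prestop    # hypothetical path

On delete, the kubelet performs the GET before terminating the container, and the pod only disappears once the hook and the termination grace period have run, hence the "still exists" polling above.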
• [SLOW TEST:26.247 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":280,"completed":93,"skipped":1538,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:22:43.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Feb 26 00:22:43.234: INFO: namespace kubectl-1777 Feb 26 00:22:43.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1777' Feb 26 00:22:43.665: INFO: stderr: "" Feb 26 00:22:43.665: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Feb 26 00:22:44.679: INFO: Selector matched 1 pods for map[app:agnhost] Feb 26 00:22:44.679: INFO: Found 0 / 1 Feb 26 00:22:45.678: INFO: Selector matched 1 pods for map[app:agnhost] Feb 26 00:22:45.679: INFO: Found 0 / 1 Feb 26 00:22:46.671: INFO: Selector matched 1 pods for map[app:agnhost] Feb 26 00:22:46.671: INFO: Found 0 / 1 Feb 26 00:22:47.673: INFO: Selector matched 1 pods for map[app:agnhost] Feb 26 00:22:47.674: INFO: Found 0 / 1 Feb 26 00:22:48.675: INFO: Selector matched 1 pods for map[app:agnhost] Feb 26 00:22:48.675: INFO: Found 0 / 1 Feb 26 00:22:49.674: INFO: Selector matched 1 pods for map[app:agnhost] Feb 26 00:22:49.675: INFO: Found 0 / 1 Feb 26 00:22:50.680: INFO: Selector matched 1 pods for map[app:agnhost] Feb 26 00:22:50.680: INFO: Found 0 / 1 Feb 26 00:22:51.674: INFO: Selector matched 1 pods for map[app:agnhost] Feb 26 00:22:51.674: INFO: Found 0 / 1 Feb 26 00:22:52.695: INFO: Selector matched 1 pods for map[app:agnhost] Feb 26 00:22:52.695: INFO: Found 0 / 1 Feb 26 00:22:53.698: INFO: Selector matched 1 pods for map[app:agnhost] Feb 26 00:22:53.698: INFO: Found 1 / 1 Feb 26 00:22:53.698: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 26 00:22:53.707: INFO: Selector matched 1 pods for map[app:agnhost] Feb 26 00:22:53.707: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 26 00:22:53.707: INFO: wait on agnhost-master startup in kubectl-1777 Feb 26 00:22:53.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-t72wt agnhost-master --namespace=kubectl-1777' Feb 26 00:22:53.871: INFO: stderr: "" Feb 26 00:22:53.871: INFO: stdout: "Paused\n" STEP: exposing RC Feb 26 00:22:53.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1777' Feb 26 00:22:54.034: INFO: stderr: "" Feb 26 00:22:54.034: INFO: stdout: "service/rm2 exposed\n" Feb 26 00:22:54.045: INFO: Service rm2 in namespace kubectl-1777 found. STEP: exposing service Feb 26 00:22:56.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1777' Feb 26 00:22:56.679: INFO: stderr: "" Feb 26 00:22:56.680: INFO: stdout: "service/rm3 exposed\n" Feb 26 00:22:56.688: INFO: Service rm3 in namespace kubectl-1777 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:22:58.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1777" for this suite. • [SLOW TEST:15.596 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":280,"completed":94,"skipped":1539,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:22:58.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Feb 26 00:22:58.819: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 26 00:22:58.877: INFO: Waiting for terminating namespaces to be deleted... 
Feb 26 00:22:58.916: INFO: Logging pods the kubelet thinks is on node jerma-node before test Feb 26 00:22:58.925: INFO: pod-handle-http-request from container-lifecycle-hook-5538 started at 2020-02-26 00:22:17 +0000 UTC (1 container statuses recorded) Feb 26 00:22:58.925: INFO: Container pod-handle-http-request ready: false, restart count 0 Feb 26 00:22:58.925: INFO: agnhost-master-t72wt from kubectl-1777 started at 2020-02-26 00:22:43 +0000 UTC (1 container statuses recorded) Feb 26 00:22:58.925: INFO: Container agnhost-master ready: true, restart count 0 Feb 26 00:22:58.925: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 26 00:22:58.925: INFO: Container kube-proxy ready: true, restart count 0 Feb 26 00:22:58.925: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 26 00:22:58.925: INFO: Container weave ready: true, restart count 1 Feb 26 00:22:58.925: INFO: Container weave-npc ready: true, restart count 0 Feb 26 00:22:58.925: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Feb 26 00:22:58.939: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 26 00:22:58.939: INFO: Container coredns ready: true, restart count 0 Feb 26 00:22:58.939: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 26 00:22:58.939: INFO: Container coredns ready: true, restart count 0 Feb 26 00:22:58.939: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 26 00:22:58.939: INFO: Container kube-controller-manager ready: true, restart count 19 Feb 26 00:22:58.939: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 26 00:22:58.939: INFO: Container kube-proxy ready: true, restart count 0 Feb 26 00:22:58.939: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 26 00:22:58.939: INFO: Container weave ready: true, restart count 0 Feb 26 00:22:58.939: INFO: Container weave-npc ready: true, restart count 0 Feb 26 00:22:58.939: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 26 00:22:58.939: INFO: Container kube-scheduler ready: true, restart count 25 Feb 26 00:22:58.939: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 26 00:22:58.939: INFO: Container kube-apiserver ready: true, restart count 1 Feb 26 00:22:58.939: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 26 00:22:58.939: INFO: Container etcd ready: true, restart count 1 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-95746872-bf8e-472b-99aa-d2d87553e8c0 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-95746872-bf8e-472b-99aa-d2d87553e8c0 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-95746872-bf8e-472b-99aa-d2d87553e8c0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:23:17.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9382" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:18.554 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":280,"completed":95,"skipped":1554,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:23:17.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:23:17.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1660" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":280,"completed":96,"skipped":1576,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:23:17.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 26 00:23:17.900: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 26 00:23:19.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:23:22.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:23:23.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:23:25.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:23:28.088: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:23:29.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273397, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook 
service STEP: Verifying the service has paired with the endpoint Feb 26 00:23:32.977: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:23:33.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5991" for this suite. STEP: Destroying namespace "webhook-5991-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:15.965 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":280,"completed":97,"skipped":1578,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:23:33.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 26 00:23:34.029: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 26 00:23:36.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:23:38.052: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:23:40.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:23:42.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 26 00:23:44.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718273414, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 26 00:23:47.070: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:23:47.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4096" for this suite. STEP: Destroying namespace "webhook-4096-markers" for this suite. 
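"Fail closed" refers to failurePolicy: Fail: if the apiserver cannot reach the webhook backend at all, matching requests are rejected rather than waved through. The test registers a webhook pointing at a service that can never answer and scopes it to a dedicated namespace. A sketch, with the selector label and service name as placeholders:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example            # hypothetical
webhooks:
- name: fail-closed.example.com        # hypothetical
  failurePolicy: Fail                  # reject whenever the webhook call fails
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  namespaceSelector:
    matchLabels:
      fail-closed-webhook: "true"      # hypothetical label on the test namespace
  clientConfig:
    service:
      namespace: webhook-4096          # test namespace from the log
      name: no-such-service            # hypothetical, deliberately unreachable
      path: /configmaps
    caBundle: "<base64-encoded CA>"    # elided
  sideEffects: None
  admissionReviewVersions: ["v1"]

Creating any ConfigMap in the labeled namespace then fails unconditionally, which is exactly what the final step asserts.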
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.005 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":280,"completed":98,"skipped":1584,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:23:47.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 26 00:23:47.556: INFO: Waiting up to 5m0s for pod "pod-f36d3a93-52c0-4e71-9bf0-fa56644fb163" in namespace "emptydir-4501" to be "success or failure" Feb 26 00:23:47.625: INFO: Pod "pod-f36d3a93-52c0-4e71-9bf0-fa56644fb163": Phase="Pending", Reason="", readiness=false. Elapsed: 69.539459ms Feb 26 00:23:49.636: INFO: Pod "pod-f36d3a93-52c0-4e71-9bf0-fa56644fb163": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079997861s Feb 26 00:23:51.654: INFO: Pod "pod-f36d3a93-52c0-4e71-9bf0-fa56644fb163": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098324159s Feb 26 00:23:54.529: INFO: Pod "pod-f36d3a93-52c0-4e71-9bf0-fa56644fb163": Phase="Pending", Reason="", readiness=false. Elapsed: 6.973121905s Feb 26 00:23:56.541: INFO: Pod "pod-f36d3a93-52c0-4e71-9bf0-fa56644fb163": Phase="Pending", Reason="", readiness=false. Elapsed: 8.984713268s Feb 26 00:23:58.550: INFO: Pod "pod-f36d3a93-52c0-4e71-9bf0-fa56644fb163": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.994538825s STEP: Saw pod success Feb 26 00:23:58.551: INFO: Pod "pod-f36d3a93-52c0-4e71-9bf0-fa56644fb163" satisfied condition "success or failure" Feb 26 00:23:58.553: INFO: Trying to get logs from node jerma-node pod pod-f36d3a93-52c0-4e71-9bf0-fa56644fb163 container test-container: STEP: delete the pod Feb 26 00:23:58.625: INFO: Waiting for pod pod-f36d3a93-52c0-4e71-9bf0-fa56644fb163 to disappear Feb 26 00:23:58.630: INFO: Pod pod-f36d3a93-52c0-4e71-9bf0-fa56644fb163 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:23:58.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4501" for this suite. 
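The emptyDir matrix tests in this run all follow one pattern: run a short-lived pod that inspects and writes to the mounted volume, then wait for phase Succeeded (the "success or failure" condition). A sketch of the (non-root,0777,default) case; the UID, image, and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-example      # hypothetical
spec:
  restartPolicy: Never             # a clean exit yields phase Succeeded
  securityContext:
    runAsUser: 1001                # the non-root part of the matrix
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # default medium (node disk); medium: Memory gives the tmpfs variant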
• [SLOW TEST:11.205 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":99,"skipped":1597,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:23:58.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 26 00:23:59.164: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 26 00:24:04.481: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:24:04.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7868" for this suite. 
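"Release" here is about ownership: once a pod's labels stop matching the ReplicationController's selector, the controller drops its ownerReference on the pod (orphaning it) and creates a replacement to satisfy replicas. A sketch of the setup; the selector key and the image are assumptions:

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release                # name from the log
spec:
  replicas: 1
  selector:
    name: pod-release              # assumed label key/value
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: k8s.gcr.io/pause:3.1  # illustrative

Relabeling the running pod (for example, kubectl label pod <pod> name=released --overwrite) takes it out of the selector; the controller releases it and starts a fresh pod, which is the behavior asserted above.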
• [SLOW TEST:6.138 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":280,"completed":100,"skipped":1620,"failed":0} S ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:24:04.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-projected-all-test-volume-63f80d0b-502b-424c-894c-e86d0f4e93ef STEP: Creating secret with name secret-projected-all-test-volume-5529c9a0-a9a4-4389-9c84-38a1b1cae43a STEP: Creating a pod to test Check all projections for projected volume plugin Feb 26 00:24:05.068: INFO: Waiting up to 5m0s for pod "projected-volume-7e7e3446-255d-40af-ade5-bcc5cfd1dad7" in namespace "projected-1777" to be "success or failure" Feb 26 00:24:05.120: INFO: Pod "projected-volume-7e7e3446-255d-40af-ade5-bcc5cfd1dad7": Phase="Pending", Reason="", readiness=false. Elapsed: 51.721806ms Feb 26 00:24:07.126: INFO: Pod "projected-volume-7e7e3446-255d-40af-ade5-bcc5cfd1dad7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057533489s Feb 26 00:24:09.133: INFO: Pod "projected-volume-7e7e3446-255d-40af-ade5-bcc5cfd1dad7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065204875s Feb 26 00:24:11.140: INFO: Pod "projected-volume-7e7e3446-255d-40af-ade5-bcc5cfd1dad7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072089662s Feb 26 00:24:13.150: INFO: Pod "projected-volume-7e7e3446-255d-40af-ade5-bcc5cfd1dad7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082051325s Feb 26 00:24:15.155: INFO: Pod "projected-volume-7e7e3446-255d-40af-ade5-bcc5cfd1dad7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.086659106s Feb 26 00:24:17.162: INFO: Pod "projected-volume-7e7e3446-255d-40af-ade5-bcc5cfd1dad7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.094125083s STEP: Saw pod success Feb 26 00:24:17.163: INFO: Pod "projected-volume-7e7e3446-255d-40af-ade5-bcc5cfd1dad7" satisfied condition "success or failure" Feb 26 00:24:17.166: INFO: Trying to get logs from node jerma-node pod projected-volume-7e7e3446-255d-40af-ade5-bcc5cfd1dad7 container projected-all-volume-test: STEP: delete the pod Feb 26 00:24:17.204: INFO: Waiting for pod projected-volume-7e7e3446-255d-40af-ade5-bcc5cfd1dad7 to disappear Feb 26 00:24:17.209: INFO: Pod projected-volume-7e7e3446-255d-40af-ade5-bcc5cfd1dad7 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:24:17.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1777" for this suite. • [SLOW TEST:12.484 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":280,"completed":101,"skipped":1621,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:24:17.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Feb 26 00:24:17.439: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:24:32.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7462" for this suite. 
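The two specs above exercise the projected volume plugin and init containers. Minimal sketches of the pods they create (names, images, and data keys are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox:1.29
    # one mount exposing all three projection sources side by side
    command: ["sh", "-c", "cat /all/podname /all/secret-data /all/configmap-data"]
    volumeMounts:
    - name: podinfo
      mountPath: /all
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - secret:
          name: projected-secret       # illustrative
          items:
          - key: data-1
            path: secret-data
      - configMap:
          name: projected-configmap    # illustrative
          items:
          - key: data-1
            path: configmap-data
---
apiVersion: v1
kind: Pod
metadata:
  name: init-restartalways-example
spec:
  restartPolicy: Always                # the RestartAlways variant under test
  initContainers:                      # run sequentially to completion before the app container starts
  - name: init1
    image: busybox:1.29
    command: ["true"]
  - name: init2
    image: busybox:1.29
    command: ["true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1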
• [SLOW TEST:14.891 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":280,"completed":102,"skipped":1649,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:24:32.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 26 00:24:32.260: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:24:32.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7061" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":280,"completed":103,"skipped":1653,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:24:32.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 26 00:24:33.030: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:24:34.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4374" for this suite. 
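Both CustomResourceDefinition specs (completed: 103 and 104) can be pictured against a CRD like the one below; the group, kind, and field names are illustrative, not taken from the run:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}            # enables the /status sub-resource that completed:103 gets/updates/patches
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1  # applied both on API requests and when reading from storage (completed:104)
          status:
            type: object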
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":280,"completed":104,"skipped":1660,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:24:34.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 26 00:24:34.792: INFO: Waiting up to 5m0s for pod "downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4" in namespace "downward-api-336" to be "success or failure" Feb 26 00:24:34.806: INFO: Pod "downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.79808ms Feb 26 00:24:36.820: INFO: Pod "downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027987153s Feb 26 00:24:38.830: INFO: Pod "downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037921629s Feb 26 00:24:40.837: INFO: Pod "downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045616206s Feb 26 00:24:42.845: INFO: Pod "downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053280187s Feb 26 00:24:44.860: INFO: Pod "downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067847704s Feb 26 00:24:46.902: INFO: Pod "downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.109795046s Feb 26 00:24:48.915: INFO: Pod "downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.123048097s STEP: Saw pod success Feb 26 00:24:48.915: INFO: Pod "downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4" satisfied condition "success or failure" Feb 26 00:24:48.927: INFO: Trying to get logs from node jerma-node pod downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4 container dapi-container: STEP: delete the pod Feb 26 00:24:49.018: INFO: Waiting for pod downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4 to disappear Feb 26 00:24:49.030: INFO: Pod downward-api-08a2dbd9-8f9c-4ca7-ba27-60197936d7c4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:24:49.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-336" for this suite. 
• [SLOW TEST:14.350 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":280,"completed":105,"skipped":1668,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:24:49.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 26 00:24:49.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4062' Feb 26 00:24:51.185: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 26 00:24:51.185: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Feb 26 00:24:51.265: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Feb 26 00:24:51.275: INFO: scanned /root for discovery docs: Feb 26 00:24:51.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4062' Feb 26 00:25:14.808: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 26 00:25:14.808: INFO: stdout: "Created e2e-test-httpd-rc-3908335c660aa59eaad4a1551ce41def\nScaling up e2e-test-httpd-rc-3908335c660aa59eaad4a1551ce41def from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-3908335c660aa59eaad4a1551ce41def up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-3908335c660aa59eaad4a1551ce41def to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Feb 26 00:25:14.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4062' Feb 26 00:25:14.932: INFO: stderr: "" Feb 26 00:25:14.932: INFO: stdout: "e2e-test-httpd-rc-3908335c660aa59eaad4a1551ce41def-vdzh4 " Feb 26 00:25:14.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-3908335c660aa59eaad4a1551ce41def-vdzh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4062' Feb 26 00:25:15.020: INFO: stderr: "" Feb 26 00:25:15.020: INFO: stdout: "true" Feb 26 00:25:15.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-3908335c660aa59eaad4a1551ce41def-vdzh4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4062' Feb 26 00:25:15.111: INFO: stderr: "" Feb 26 00:25:15.111: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Feb 26 00:25:15.111: INFO: e2e-test-httpd-rc-3908335c660aa59eaad4a1551ce41def-vdzh4 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700 Feb 26 00:25:15.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4062' Feb 26 00:25:15.214: INFO: stderr: "" Feb 26 00:25:15.214: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:25:15.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4062" for this suite.
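For context: `kubectl run --generator=run/v1`, as invoked above, created a bare ReplicationController roughly like the following (reconstructed from the names and the run= label visible in the log):

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-httpd-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-httpd-rc
  template:
    metadata:
      labels:
        run: e2e-test-httpd-rc
    spec:
      containers:
      - name: e2e-test-httpd-rc
        image: docker.io/library/httpd:2.4.38-alpine

# kubectl rolling-update works client-side: it creates a second RC with a hashed name,
# scales it up one pod at a time while scaling the original down, then deletes the old
# RC and renames the new one back, exactly the sequence printed in the stdout above.
# The deprecation warnings point at its replacements: Deployments and `kubectl rollout`.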
• [SLOW TEST:26.250 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":280,"completed":106,"skipped":1673,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:25:15.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 26 00:25:15.554: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3019 /api/v1/namespaces/watch-3019/configmaps/e2e-watch-test-watch-closed 3893543b-8c9e-4093-9c64-7ec00edb6cfd 10765578 0 2020-02-26 00:25:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 26 00:25:15.554: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3019 /api/v1/namespaces/watch-3019/configmaps/e2e-watch-test-watch-closed 3893543b-8c9e-4093-9c64-7ec00edb6cfd 10765579 0 2020-02-26 00:25:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 26 00:25:15.607: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3019 /api/v1/namespaces/watch-3019/configmaps/e2e-watch-test-watch-closed 3893543b-8c9e-4093-9c64-7ec00edb6cfd 10765581 0 2020-02-26 00:25:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 26 00:25:15.608: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3019 /api/v1/namespaces/watch-3019/configmaps/e2e-watch-test-watch-closed 3893543b-8c9e-4093-9c64-7ec00edb6cfd 10765583 0 2020-02-26 00:25:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} 
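The Watchers spec leans on metadata.resourceVersion. Rendered as YAML (trimmed to the fields that matter), the ConfigMap at the moment the first watch closed looked like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-watch-closed
  namespace: watch-3019
  labels:
    watch-this-configmap: watch-closed-and-restarted
  resourceVersion: "10765579"   # version carried by the last event the first watch saw
data:
  mutation: "1"

# Re-opening a watch with resourceVersion=10765579 replays everything after that point:
# here the MODIFIED at 10765581 and the DELETED at 10765583, which is exactly what the
# spec asserts.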
[AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:25:15.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3019" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":280,"completed":107,"skipped":1674,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:25:15.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating all guestbook components Feb 26 00:25:16.004: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Feb 26 00:25:16.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4321' Feb 26 00:25:16.462: INFO: stderr: "" Feb 26 00:25:16.462: INFO: stdout: "service/agnhost-slave created\n" Feb 26 00:25:16.465: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Feb 26 00:25:16.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4321' Feb 26 00:25:16.825: INFO: stderr: "" Feb 26 00:25:16.825: INFO: stdout: "service/agnhost-master created\n" Feb 26 00:25:16.826: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Feb 26 00:25:16.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4321' Feb 26 00:25:17.188: INFO: stderr: "" Feb 26 00:25:17.188: INFO: stdout: "service/frontend created\n" Feb 26 00:25:17.189: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Feb 26 00:25:17.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4321' Feb 26 00:25:17.495: INFO: stderr: "" Feb 26 00:25:17.495: INFO: stdout: "deployment.apps/frontend created\n" Feb 26 00:25:17.496: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 26 00:25:17.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4321' Feb 26 00:25:17.866: INFO: stderr: "" Feb 26 00:25:17.867: INFO: stdout: "deployment.apps/agnhost-master created\n" Feb 26 00:25:17.867: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 26 00:25:17.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4321' Feb 26 00:25:18.646: INFO: stderr: "" Feb 26 00:25:18.646: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Feb 26 00:25:18.646: INFO: Waiting for all frontend pods to be Running. Feb 26 00:25:43.699: INFO: Waiting for frontend to serve content. Feb 26 00:25:43.731: INFO: Trying to add a new entry to the guestbook. Feb 26 00:25:43.763: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Feb 26 00:25:48.779: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Feb 26 00:25:53.804: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused Feb 26 00:25:58.834: INFO: Failed to get response from guestbook. 
err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
[the identical "Failed to get response from guestbook" / "connection refused" record repeats every ~5s from Feb 26 00:26:03.864 through Feb 26 00:28:39.616; every attempt to reach the slave at 10.32.0.1:6379 was refused]
Feb 26 00:28:44.617: FAIL: Cannot added new entry in 180 seconds. Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x551f740, 0xc0061dc000, 0xc005364e10, 0xc) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339 +0x551 k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2() /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:420 +0x165 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0001ce800) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a k8s.io/kubernetes/test/e2e.TestE2E(0xc0001ce800) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b testing.tRunner(0xc0001ce800, 0x4c9f938) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 STEP: using delete to clean up resources Feb 26 00:28:44.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4321' Feb 26 00:28:44.984: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 26 00:28:44.984: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Feb 26 00:28:44.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4321' Feb 26 00:28:45.175: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Feb 26 00:28:45.175: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Feb 26 00:28:45.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4321' Feb 26 00:28:45.335: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 26 00:28:45.336: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 26 00:28:45.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4321' Feb 26 00:28:45.456: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 26 00:28:45.456: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 26 00:28:45.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4321' Feb 26 00:28:45.563: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 26 00:28:45.564: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Feb 26 00:28:45.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4321' Feb 26 00:28:45.700: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 26 00:28:45.700: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "kubectl-4321". STEP: Found 33 events. 
Feb 26 00:28:45.707: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-jdxbm: {default-scheduler } Scheduled: Successfully assigned kubectl-4321/agnhost-master-74c46fb7d4-jdxbm to jerma-node Feb 26 00:28:45.708: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-hwhnd: {default-scheduler } Scheduled: Successfully assigned kubectl-4321/agnhost-slave-774cfc759f-hwhnd to jerma-server-mvvl6gufaqub Feb 26 00:28:45.708: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-rlxgv: {default-scheduler } Scheduled: Successfully assigned kubectl-4321/agnhost-slave-774cfc759f-rlxgv to jerma-node Feb 26 00:28:45.708: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-4qzms: {default-scheduler } Scheduled: Successfully assigned kubectl-4321/frontend-6c5f89d5d4-4qzms to jerma-server-mvvl6gufaqub Feb 26 00:28:45.708: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-7xhkv: {default-scheduler } Scheduled: Successfully assigned kubectl-4321/frontend-6c5f89d5d4-7xhkv to jerma-node Feb 26 00:28:45.708: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-hcxxb: {default-scheduler } Scheduled: Successfully assigned kubectl-4321/frontend-6c5f89d5d4-hcxxb to jerma-node Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:17 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1 Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:17 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3 Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:17 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-4qzms Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:17 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-hcxxb Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:17 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-7xhkv Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:18 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-jdxbm Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:19 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2 Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:19 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-rlxgv Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:19 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-hwhnd Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:25 +0000 UTC - event for frontend-6c5f89d5d4-4qzms: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:26 +0000 UTC - event for frontend-6c5f89d5d4-hcxxb: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:27 +0000 UTC - event for agnhost-slave-774cfc759f-hwhnd: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image 
"gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:30 +0000 UTC - event for frontend-6c5f89d5d4-7xhkv: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:31 +0000 UTC - event for agnhost-master-74c46fb7d4-jdxbm: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:32 +0000 UTC - event for agnhost-slave-774cfc759f-hwhnd: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:32 +0000 UTC - event for frontend-6c5f89d5d4-4qzms: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:33 +0000 UTC - event for agnhost-slave-774cfc759f-hwhnd: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:33 +0000 UTC - event for frontend-6c5f89d5d4-4qzms: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:35 +0000 UTC - event for agnhost-slave-774cfc759f-rlxgv: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:35 +0000 UTC - event for frontend-6c5f89d5d4-hcxxb: {kubelet jerma-node} Created: Created container guestbook-frontend Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:37 +0000 UTC - event for agnhost-master-74c46fb7d4-jdxbm: {kubelet jerma-node} Created: Created container master Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:37 +0000 UTC - event for frontend-6c5f89d5d4-7xhkv: {kubelet jerma-node} Created: Created container guestbook-frontend Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:38 +0000 UTC - event for agnhost-master-74c46fb7d4-jdxbm: {kubelet jerma-node} Started: Started container master Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:38 +0000 UTC - event for agnhost-slave-774cfc759f-rlxgv: {kubelet jerma-node} Created: Created container slave Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:38 +0000 UTC - event for agnhost-slave-774cfc759f-rlxgv: {kubelet jerma-node} Started: Started container slave Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:38 +0000 UTC - event for frontend-6c5f89d5d4-7xhkv: {kubelet jerma-node} Started: Started container guestbook-frontend Feb 26 00:28:45.708: INFO: At 2020-02-26 00:25:38 +0000 UTC - event for frontend-6c5f89d5d4-hcxxb: {kubelet jerma-node} Started: Started container guestbook-frontend Feb 26 00:28:45.717: INFO: POD NODE PHASE GRACE CONDITIONS Feb 26 00:28:45.717: INFO: agnhost-master-74c46fb7d4-jdxbm jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:18 +0000 UTC }] Feb 26 00:28:45.718: INFO: agnhost-slave-774cfc759f-hwhnd jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:34 +0000 UTC } {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:19 +0000 UTC }] Feb 26 00:28:45.718: INFO: agnhost-slave-774cfc759f-rlxgv jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:19 +0000 UTC }] Feb 26 00:28:45.718: INFO: frontend-6c5f89d5d4-4qzms jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:17 +0000 UTC }] Feb 26 00:28:45.718: INFO: frontend-6c5f89d5d4-7xhkv jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:17 +0000 UTC }] Feb 26 00:28:45.718: INFO: frontend-6c5f89d5d4-hcxxb jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:25:17 +0000 UTC }] Feb 26 00:28:45.718: INFO: Feb 26 00:28:45.809: INFO: Logging node info for node jerma-node Feb 26 00:28:45.868: INFO: Node Info: &Node{ObjectMeta:{jerma-node /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 10766025 0 2020-01-04 11:59:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-26 00:27:36 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-26 00:27:36 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-26 00:27:36 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-26 00:27:36 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 
gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 26 00:28:45.873: INFO: Logging kubelet events for node jerma-node Feb 26 00:28:45.951: INFO: Logging pods the kubelet thinks is on node jerma-node Feb 26 00:28:46.037: INFO: frontend-6c5f89d5d4-7xhkv started at 2020-02-26 00:25:17 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.038: INFO: Container guestbook-frontend ready: true, restart count 0 Feb 26 00:28:46.038: INFO: agnhost-master-74c46fb7d4-jdxbm started at 2020-02-26 00:25:20 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.038: INFO: Container master ready: true, restart count 0 Feb 26 00:28:46.038: INFO: agnhost-slave-774cfc759f-rlxgv started at 2020-02-26 00:25:21 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.038: INFO: Container slave ready: true, restart count 0 Feb 26 00:28:46.038: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.038: INFO: Container kube-proxy ready: true, restart count 0 Feb 26 00:28:46.038: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded) Feb 26 00:28:46.038: INFO: Container weave ready: true, restart count 1 Feb 26 00:28:46.038: INFO: Container weave-npc ready: true, restart count 0 Feb 26 00:28:46.038: INFO: frontend-6c5f89d5d4-hcxxb started at 2020-02-26 00:25:17 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.038: INFO: Container guestbook-frontend ready: true, restart count 0 W0226 00:28:46.081006 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 26 00:28:46.218: INFO: Latency metrics for node jerma-node Feb 26 00:28:46.218: INFO: Logging node info for node jerma-server-mvvl6gufaqub Feb 26 00:28:46.230: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 10766138 0 2020-01-04 11:47:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-26 00:28:26 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-26 00:28:26 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-26 00:28:26 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-26 00:28:26 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[ollivier/functest-kubernetes-security@sha256:e07875af6d375759fd233dc464382bb51d2464f6ae50a60625e41226eb1f87be ollivier/functest-kubernetes-security:latest],SizeBytes:1118568659,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 
gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 26 00:28:46.232: INFO: Logging kubelet events for node jerma-server-mvvl6gufaqub Feb 26 00:28:46.239: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub Feb 26 00:28:46.277: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.278: INFO: Container kube-scheduler ready: true, restart count 25 Feb 26 00:28:46.278: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.278: INFO: Container kube-apiserver ready: true, restart count 1 Feb 26 00:28:46.278: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.278: INFO: Container etcd ready: true, restart count 1 Feb 26 00:28:46.278: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.278: INFO: Container coredns ready: true, restart count 0 Feb 26 00:28:46.278: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.278: INFO: Container coredns ready: true, restart count 0 Feb 26 00:28:46.278: INFO: frontend-6c5f89d5d4-4qzms started at 2020-02-26 00:25:17 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.278: INFO: Container guestbook-frontend ready: true, restart count 0 Feb 26 00:28:46.278: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.278: INFO: Container kube-controller-manager ready: true, restart count 19 Feb 26 00:28:46.278: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.278: INFO: Container kube-proxy ready: true, restart count 0 Feb 26 00:28:46.278: INFO: 
weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded) Feb 26 00:28:46.278: INFO: Container weave ready: true, restart count 0 Feb 26 00:28:46.278: INFO: Container weave-npc ready: true, restart count 0 Feb 26 00:28:46.278: INFO: agnhost-slave-774cfc759f-hwhnd started at 2020-02-26 00:25:19 +0000 UTC (0+1 container statuses recorded) Feb 26 00:28:46.278: INFO: Container slave ready: true, restart count 0 W0226 00:28:46.290179 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 26 00:28:46.345: INFO: Latency metrics for node jerma-server-mvvl6gufaqub Feb 26 00:28:46.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4321" for this suite. • Failure [210.557 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388 should create and stop a working application [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 26 00:28:44.617: Cannot add new entry in 180 seconds. /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339 ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":280,"completed":107,"skipped":1686,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:28:46.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-1804/configmap-test-9c7b7f10-b40e-48fa-9c73-7183c2c6f1cf STEP: Creating a pod to test consume configMaps Feb 26 00:28:47.734: INFO: Waiting up to 5m0s for pod "pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78" in namespace "configmap-1804" to be "success or failure" Feb 26 00:28:47.877: INFO: Pod "pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78": Phase="Pending", Reason="", readiness=false. Elapsed: 142.990896ms Feb 26 00:28:50.185: INFO: Pod "pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.451428285s Feb 26 00:28:52.489: INFO: Pod "pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.755605714s Feb 26 00:28:54.540: INFO: Pod "pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.806585107s Feb 26 00:28:56.554: INFO: Pod "pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.820305391s Feb 26 00:28:58.565: INFO: Pod "pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78": Phase="Pending", Reason="", readiness=false. Elapsed: 10.831611209s Feb 26 00:29:00.575: INFO: Pod "pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78": Phase="Pending", Reason="", readiness=false. Elapsed: 12.841339751s Feb 26 00:29:02.583: INFO: Pod "pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78": Phase="Pending", Reason="", readiness=false. Elapsed: 14.849267181s Feb 26 00:29:04.598: INFO: Pod "pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.864561949s STEP: Saw pod success Feb 26 00:29:04.599: INFO: Pod "pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78" satisfied condition "success or failure" Feb 26 00:29:04.628: INFO: Trying to get logs from node jerma-node pod pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78 container env-test: STEP: delete the pod Feb 26 00:29:04.703: INFO: Waiting for pod pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78 to disappear Feb 26 00:29:04.715: INFO: Pod pod-configmaps-4f096cec-72e8-42de-993f-a127c926ed78 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:29:04.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1804" for this suite. • [SLOW TEST:18.395 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":108,"skipped":1690,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:29:04.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5555 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5555 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5555 Feb 26 
00:29:04.922: INFO: Found 0 stateful pods, waiting for 1 Feb 26 00:29:14.929: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 26 00:29:14.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5555 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 26 00:29:15.255: INFO: stderr: "I0226 00:29:15.090043 1551 log.go:172] (0xc0000f4c60) (0xc0006c5ea0) Create stream\nI0226 00:29:15.090317 1551 log.go:172] (0xc0000f4c60) (0xc0006c5ea0) Stream added, broadcasting: 1\nI0226 00:29:15.094592 1551 log.go:172] (0xc0000f4c60) Reply frame received for 1\nI0226 00:29:15.094630 1551 log.go:172] (0xc0000f4c60) (0xc000581360) Create stream\nI0226 00:29:15.094639 1551 log.go:172] (0xc0000f4c60) (0xc000581360) Stream added, broadcasting: 3\nI0226 00:29:15.095936 1551 log.go:172] (0xc0000f4c60) Reply frame received for 3\nI0226 00:29:15.095963 1551 log.go:172] (0xc0000f4c60) (0xc00077a0a0) Create stream\nI0226 00:29:15.095972 1551 log.go:172] (0xc0000f4c60) (0xc00077a0a0) Stream added, broadcasting: 5\nI0226 00:29:15.097826 1551 log.go:172] (0xc0000f4c60) Reply frame received for 5\nI0226 00:29:15.165581 1551 log.go:172] (0xc0000f4c60) Data frame received for 5\nI0226 00:29:15.165632 1551 log.go:172] (0xc00077a0a0) (5) Data frame handling\nI0226 00:29:15.165653 1551 log.go:172] (0xc00077a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0226 00:29:15.188361 1551 log.go:172] (0xc0000f4c60) Data frame received for 3\nI0226 00:29:15.188384 1551 log.go:172] (0xc000581360) (3) Data frame handling\nI0226 00:29:15.188397 1551 log.go:172] (0xc000581360) (3) Data frame sent\nI0226 00:29:15.244737 1551 log.go:172] (0xc0000f4c60) Data frame received for 1\nI0226 00:29:15.244801 1551 log.go:172] (0xc0006c5ea0) (1) Data frame handling\nI0226 00:29:15.244814 1551 log.go:172] (0xc0006c5ea0) (1) Data frame sent\nI0226 00:29:15.244971 1551 log.go:172] (0xc0000f4c60) (0xc000581360) Stream removed, broadcasting: 3\nI0226 00:29:15.245002 1551 log.go:172] (0xc0000f4c60) (0xc0006c5ea0) Stream removed, broadcasting: 1\nI0226 00:29:15.245402 1551 log.go:172] (0xc0000f4c60) (0xc00077a0a0) Stream removed, broadcasting: 5\nI0226 00:29:15.245440 1551 log.go:172] (0xc0000f4c60) Go away received\nI0226 00:29:15.245532 1551 log.go:172] (0xc0000f4c60) (0xc0006c5ea0) Stream removed, broadcasting: 1\nI0226 00:29:15.245541 1551 log.go:172] (0xc0000f4c60) (0xc000581360) Stream removed, broadcasting: 3\nI0226 00:29:15.245545 1551 log.go:172] (0xc0000f4c60) (0xc00077a0a0) Stream removed, broadcasting: 5\n" Feb 26 00:29:15.255: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 26 00:29:15.255: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 26 00:29:15.262: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 26 00:29:25.271: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 26 00:29:25.271: INFO: Waiting for statefulset status.replicas updated to 0 Feb 26 00:29:25.297: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999997925s Feb 26 00:29:26.307: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988168785s Feb 26 00:29:27.314: INFO: Verifying statefulset ss doesn't 
scale past 1 for another 7.978240772s Feb 26 00:29:28.322: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.970809391s Feb 26 00:29:29.333: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.963045157s Feb 26 00:29:30.340: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.952167787s Feb 26 00:29:31.350: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.945232274s Feb 26 00:29:32.356: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.935185135s Feb 26 00:29:33.364: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.929260791s Feb 26 00:29:34.480: INFO: Verifying statefulset ss doesn't scale past 1 for another 921.373714ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5555 Feb 26 00:29:35.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 26 00:29:35.899: INFO: stderr: "I0226 00:29:35.698303 1571 log.go:172] (0xc000a5e420) (0xc000a3c280) Create stream\nI0226 00:29:35.698476 1571 log.go:172] (0xc000a5e420) (0xc000a3c280) Stream added, broadcasting: 1\nI0226 00:29:35.701508 1571 log.go:172] (0xc000a5e420) Reply frame received for 1\nI0226 00:29:35.701549 1571 log.go:172] (0xc000a5e420) (0xc000b30c80) Create stream\nI0226 00:29:35.701558 1571 log.go:172] (0xc000a5e420) (0xc000b30c80) Stream added, broadcasting: 3\nI0226 00:29:35.704373 1571 log.go:172] (0xc000a5e420) Reply frame received for 3\nI0226 00:29:35.704472 1571 log.go:172] (0xc000a5e420) (0xc000a3c320) Create stream\nI0226 00:29:35.704500 1571 log.go:172] (0xc000a5e420) (0xc000a3c320) Stream added, broadcasting: 5\nI0226 00:29:35.707736 1571 log.go:172] (0xc000a5e420) Reply frame received for 5\nI0226 00:29:35.794599 1571 log.go:172] (0xc000a5e420) Data frame received for 5\nI0226 00:29:35.796238 1571 log.go:172] (0xc000a3c320) (5) Data frame handling\nI0226 00:29:35.796341 1571 log.go:172] (0xc000a3c320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0226 00:29:35.799390 1571 log.go:172] (0xc000a5e420) Data frame received for 3\nI0226 00:29:35.799491 1571 log.go:172] (0xc000b30c80) (3) Data frame handling\nI0226 00:29:35.799578 1571 log.go:172] (0xc000b30c80) (3) Data frame sent\nI0226 00:29:35.884058 1571 log.go:172] (0xc000a5e420) Data frame received for 1\nI0226 00:29:35.884211 1571 log.go:172] (0xc000a3c280) (1) Data frame handling\nI0226 00:29:35.884243 1571 log.go:172] (0xc000a3c280) (1) Data frame sent\nI0226 00:29:35.885563 1571 log.go:172] (0xc000a5e420) (0xc000a3c280) Stream removed, broadcasting: 1\nI0226 00:29:35.888045 1571 log.go:172] (0xc000a5e420) (0xc000b30c80) Stream removed, broadcasting: 3\nI0226 00:29:35.888103 1571 log.go:172] (0xc000a5e420) (0xc000a3c320) Stream removed, broadcasting: 5\nI0226 00:29:35.888135 1571 log.go:172] (0xc000a5e420) Go away received\nI0226 00:29:35.888495 1571 log.go:172] (0xc000a5e420) (0xc000a3c280) Stream removed, broadcasting: 1\nI0226 00:29:35.888534 1571 log.go:172] (0xc000a5e420) (0xc000b30c80) Stream removed, broadcasting: 3\nI0226 00:29:35.888561 1571 log.go:172] (0xc000a5e420) (0xc000a3c320) Stream removed, broadcasting: 5\n" Feb 26 00:29:35.900: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 26 00:29:35.900: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html' Feb 26 00:29:35.905: INFO: Found 1 stateful pods, waiting for 3 Feb 26 00:29:45.997: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 26 00:29:45.997: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 26 00:29:45.997: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 26 00:29:55.916: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 26 00:29:55.916: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 26 00:29:55.916: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 26 00:29:55.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5555 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 26 00:29:56.321: INFO: stderr: "I0226 00:29:56.113017 1591 log.go:172] (0xc0000f56b0) (0xc0008580a0) Create stream\nI0226 00:29:56.113151 1591 log.go:172] (0xc0000f56b0) (0xc0008580a0) Stream added, broadcasting: 1\nI0226 00:29:56.116897 1591 log.go:172] (0xc0000f56b0) Reply frame received for 1\nI0226 00:29:56.116979 1591 log.go:172] (0xc0000f56b0) (0xc000b5e140) Create stream\nI0226 00:29:56.116992 1591 log.go:172] (0xc0000f56b0) (0xc000b5e140) Stream added, broadcasting: 3\nI0226 00:29:56.118725 1591 log.go:172] (0xc0000f56b0) Reply frame received for 3\nI0226 00:29:56.118748 1591 log.go:172] (0xc0000f56b0) (0xc000b5e1e0) Create stream\nI0226 00:29:56.118756 1591 log.go:172] (0xc0000f56b0) (0xc000b5e1e0) Stream added, broadcasting: 5\nI0226 00:29:56.120434 1591 log.go:172] (0xc0000f56b0) Reply frame received for 5\nI0226 00:29:56.214238 1591 log.go:172] (0xc0000f56b0) Data frame received for 5\nI0226 00:29:56.214294 1591 log.go:172] (0xc000b5e1e0) (5) Data frame handling\nI0226 00:29:56.214308 1591 log.go:172] (0xc000b5e1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0226 00:29:56.214331 1591 log.go:172] (0xc0000f56b0) Data frame received for 3\nI0226 00:29:56.214343 1591 log.go:172] (0xc000b5e140) (3) Data frame handling\nI0226 00:29:56.214349 1591 log.go:172] (0xc000b5e140) (3) Data frame sent\nI0226 00:29:56.308882 1591 log.go:172] (0xc0000f56b0) Data frame received for 1\nI0226 00:29:56.308920 1591 log.go:172] (0xc0008580a0) (1) Data frame handling\nI0226 00:29:56.308938 1591 log.go:172] (0xc0008580a0) (1) Data frame sent\nI0226 00:29:56.308975 1591 log.go:172] (0xc0000f56b0) (0xc0008580a0) Stream removed, broadcasting: 1\nI0226 00:29:56.309157 1591 log.go:172] (0xc0000f56b0) (0xc000b5e140) Stream removed, broadcasting: 3\nI0226 00:29:56.309356 1591 log.go:172] (0xc0000f56b0) (0xc000b5e1e0) Stream removed, broadcasting: 5\nI0226 00:29:56.309958 1591 log.go:172] (0xc0000f56b0) Go away received\nI0226 00:29:56.311914 1591 log.go:172] (0xc0000f56b0) (0xc0008580a0) Stream removed, broadcasting: 1\nI0226 00:29:56.311938 1591 log.go:172] (0xc0000f56b0) (0xc000b5e140) Stream removed, broadcasting: 3\nI0226 00:29:56.311956 1591 log.go:172] (0xc0000f56b0) (0xc000b5e1e0) Stream removed, broadcasting: 5\n" Feb 26 00:29:56.322: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 26 00:29:56.322: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true 
on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 26 00:29:56.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5555 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 26 00:29:56.756: INFO: stderr: "I0226 00:29:56.479888 1611 log.go:172] (0xc000936dc0) (0xc0009300a0) Create stream\nI0226 00:29:56.480047 1611 log.go:172] (0xc000936dc0) (0xc0009300a0) Stream added, broadcasting: 1\nI0226 00:29:56.493643 1611 log.go:172] (0xc000936dc0) Reply frame received for 1\nI0226 00:29:56.493867 1611 log.go:172] (0xc000936dc0) (0xc000641d60) Create stream\nI0226 00:29:56.493888 1611 log.go:172] (0xc000936dc0) (0xc000641d60) Stream added, broadcasting: 3\nI0226 00:29:56.496405 1611 log.go:172] (0xc000936dc0) Reply frame received for 3\nI0226 00:29:56.496584 1611 log.go:172] (0xc000936dc0) (0xc0005e6960) Create stream\nI0226 00:29:56.496609 1611 log.go:172] (0xc000936dc0) (0xc0005e6960) Stream added, broadcasting: 5\nI0226 00:29:56.501901 1611 log.go:172] (0xc000936dc0) Reply frame received for 5\nI0226 00:29:56.615296 1611 log.go:172] (0xc000936dc0) Data frame received for 5\nI0226 00:29:56.615339 1611 log.go:172] (0xc0005e6960) (5) Data frame handling\nI0226 00:29:56.615369 1611 log.go:172] (0xc0005e6960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0226 00:29:56.650089 1611 log.go:172] (0xc000936dc0) Data frame received for 3\nI0226 00:29:56.650131 1611 log.go:172] (0xc000641d60) (3) Data frame handling\nI0226 00:29:56.650164 1611 log.go:172] (0xc000641d60) (3) Data frame sent\nI0226 00:29:56.742471 1611 log.go:172] (0xc000936dc0) (0xc000641d60) Stream removed, broadcasting: 3\nI0226 00:29:56.742904 1611 log.go:172] (0xc000936dc0) Data frame received for 1\nI0226 00:29:56.743105 1611 log.go:172] (0xc000936dc0) (0xc0005e6960) Stream removed, broadcasting: 5\nI0226 00:29:56.743270 1611 log.go:172] (0xc0009300a0) (1) Data frame handling\nI0226 00:29:56.743326 1611 log.go:172] (0xc0009300a0) (1) Data frame sent\nI0226 00:29:56.743356 1611 log.go:172] (0xc000936dc0) (0xc0009300a0) Stream removed, broadcasting: 1\nI0226 00:29:56.743390 1611 log.go:172] (0xc000936dc0) Go away received\nI0226 00:29:56.744944 1611 log.go:172] (0xc000936dc0) (0xc0009300a0) Stream removed, broadcasting: 1\nI0226 00:29:56.745032 1611 log.go:172] (0xc000936dc0) (0xc000641d60) Stream removed, broadcasting: 3\nI0226 00:29:56.745109 1611 log.go:172] (0xc000936dc0) (0xc0005e6960) Stream removed, broadcasting: 5\n" Feb 26 00:29:56.756: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 26 00:29:56.756: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 26 00:29:56.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5555 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 26 00:29:57.183: INFO: stderr: "I0226 00:29:56.932902 1631 log.go:172] (0xc0000f42c0) (0xc0006208c0) Create stream\nI0226 00:29:56.933015 1631 log.go:172] (0xc0000f42c0) (0xc0006208c0) Stream added, broadcasting: 1\nI0226 00:29:56.954835 1631 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0226 00:29:56.954938 1631 log.go:172] (0xc0000f42c0) (0xc0006f9e00) Create stream\nI0226 00:29:56.954955 1631 log.go:172] (0xc0000f42c0) (0xc0006f9e00) Stream added, broadcasting: 3\nI0226 00:29:56.956576 1631 log.go:172] 
(0xc0000f42c0) Reply frame received for 3\nI0226 00:29:56.956606 1631 log.go:172] (0xc0000f42c0) (0xc0002fb540) Create stream\nI0226 00:29:56.956615 1631 log.go:172] (0xc0000f42c0) (0xc0002fb540) Stream added, broadcasting: 5\nI0226 00:29:56.957925 1631 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0226 00:29:57.060039 1631 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0226 00:29:57.060155 1631 log.go:172] (0xc0002fb540) (5) Data frame handling\nI0226 00:29:57.060203 1631 log.go:172] (0xc0002fb540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0226 00:29:57.099069 1631 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0226 00:29:57.099131 1631 log.go:172] (0xc0006f9e00) (3) Data frame handling\nI0226 00:29:57.099156 1631 log.go:172] (0xc0006f9e00) (3) Data frame sent\nI0226 00:29:57.172336 1631 log.go:172] (0xc0000f42c0) (0xc0006f9e00) Stream removed, broadcasting: 3\nI0226 00:29:57.172452 1631 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0226 00:29:57.172486 1631 log.go:172] (0xc0000f42c0) (0xc0002fb540) Stream removed, broadcasting: 5\nI0226 00:29:57.172743 1631 log.go:172] (0xc0006208c0) (1) Data frame handling\nI0226 00:29:57.172861 1631 log.go:172] (0xc0006208c0) (1) Data frame sent\nI0226 00:29:57.172925 1631 log.go:172] (0xc0000f42c0) (0xc0006208c0) Stream removed, broadcasting: 1\nI0226 00:29:57.172974 1631 log.go:172] (0xc0000f42c0) Go away received\nI0226 00:29:57.174086 1631 log.go:172] (0xc0000f42c0) (0xc0006208c0) Stream removed, broadcasting: 1\nI0226 00:29:57.174108 1631 log.go:172] (0xc0000f42c0) (0xc0006f9e00) Stream removed, broadcasting: 3\nI0226 00:29:57.174117 1631 log.go:172] (0xc0000f42c0) (0xc0002fb540) Stream removed, broadcasting: 5\n" Feb 26 00:29:57.183: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 26 00:29:57.183: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 26 00:29:57.183: INFO: Waiting for statefulset status.replicas updated to 0 Feb 26 00:29:57.209: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 26 00:30:07.230: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 26 00:30:07.230: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 26 00:30:07.230: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 26 00:30:07.257: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999548s Feb 26 00:30:08.267: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986358792s Feb 26 00:30:09.276: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975683523s Feb 26 00:30:10.285: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.967513074s Feb 26 00:30:11.295: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.958275244s Feb 26 00:30:12.305: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.948469359s Feb 26 00:30:13.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.938384134s Feb 26 00:30:14.329: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.924110745s Feb 26 00:30:15.341: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.914523588s Feb 26 00:30:16.353: INFO: Verifying statefulset ss doesn't scale past 3 for another 901.958621ms STEP: Scaling down stateful set ss to 0 
replicas and waiting until none of pods will run in namespace statefulset-5555 Feb 26 00:30:17.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 26 00:30:17.826: INFO: stderr: "I0226 00:30:17.604094 1650 log.go:172] (0xc0003d8b00) (0xc000735720) Create stream\nI0226 00:30:17.604212 1650 log.go:172] (0xc0003d8b00) (0xc000735720) Stream added, broadcasting: 1\nI0226 00:30:17.608942 1650 log.go:172] (0xc0003d8b00) Reply frame received for 1\nI0226 00:30:17.609047 1650 log.go:172] (0xc0003d8b00) (0xc00064c000) Create stream\nI0226 00:30:17.609069 1650 log.go:172] (0xc0003d8b00) (0xc00064c000) Stream added, broadcasting: 3\nI0226 00:30:17.611502 1650 log.go:172] (0xc0003d8b00) Reply frame received for 3\nI0226 00:30:17.611587 1650 log.go:172] (0xc0003d8b00) (0xc000206000) Create stream\nI0226 00:30:17.611599 1650 log.go:172] (0xc0003d8b00) (0xc000206000) Stream added, broadcasting: 5\nI0226 00:30:17.613565 1650 log.go:172] (0xc0003d8b00) Reply frame received for 5\nI0226 00:30:17.720890 1650 log.go:172] (0xc0003d8b00) Data frame received for 3\nI0226 00:30:17.721233 1650 log.go:172] (0xc00064c000) (3) Data frame handling\nI0226 00:30:17.721334 1650 log.go:172] (0xc00064c000) (3) Data frame sent\nI0226 00:30:17.721955 1650 log.go:172] (0xc0003d8b00) Data frame received for 5\nI0226 00:30:17.721980 1650 log.go:172] (0xc000206000) (5) Data frame handling\nI0226 00:30:17.721999 1650 log.go:172] (0xc000206000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0226 00:30:17.809886 1650 log.go:172] (0xc0003d8b00) Data frame received for 1\nI0226 00:30:17.810036 1650 log.go:172] (0xc0003d8b00) (0xc000206000) Stream removed, broadcasting: 5\nI0226 00:30:17.810162 1650 log.go:172] (0xc000735720) (1) Data frame handling\nI0226 00:30:17.810222 1650 log.go:172] (0xc000735720) (1) Data frame sent\nI0226 00:30:17.810445 1650 log.go:172] (0xc0003d8b00) (0xc00064c000) Stream removed, broadcasting: 3\nI0226 00:30:17.810720 1650 log.go:172] (0xc0003d8b00) (0xc000735720) Stream removed, broadcasting: 1\nI0226 00:30:17.810855 1650 log.go:172] (0xc0003d8b00) Go away received\nI0226 00:30:17.812442 1650 log.go:172] (0xc0003d8b00) (0xc000735720) Stream removed, broadcasting: 1\nI0226 00:30:17.812482 1650 log.go:172] (0xc0003d8b00) (0xc00064c000) Stream removed, broadcasting: 3\nI0226 00:30:17.812502 1650 log.go:172] (0xc0003d8b00) (0xc000206000) Stream removed, broadcasting: 5\n" Feb 26 00:30:17.827: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 26 00:30:17.827: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 26 00:30:17.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5555 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 26 00:30:18.170: INFO: stderr: "I0226 00:30:18.002731 1671 log.go:172] (0xc000820d10) (0xc0008b6460) Create stream\nI0226 00:30:18.003000 1671 log.go:172] (0xc000820d10) (0xc0008b6460) Stream added, broadcasting: 1\nI0226 00:30:18.011778 1671 log.go:172] (0xc000820d10) Reply frame received for 1\nI0226 00:30:18.011873 1671 log.go:172] (0xc000820d10) (0xc0006926e0) Create stream\nI0226 00:30:18.011892 1671 log.go:172] (0xc000820d10) (0xc0006926e0) Stream added, broadcasting: 3\nI0226 00:30:18.013331 1671 log.go:172] 
(0xc000820d10) Reply frame received for 3\nI0226 00:30:18.013378 1671 log.go:172] (0xc000820d10) (0xc0004a7360) Create stream\nI0226 00:30:18.013386 1671 log.go:172] (0xc000820d10) (0xc0004a7360) Stream added, broadcasting: 5\nI0226 00:30:18.014286 1671 log.go:172] (0xc000820d10) Reply frame received for 5\nI0226 00:30:18.099423 1671 log.go:172] (0xc000820d10) Data frame received for 3\nI0226 00:30:18.099469 1671 log.go:172] (0xc0006926e0) (3) Data frame handling\nI0226 00:30:18.099489 1671 log.go:172] (0xc0006926e0) (3) Data frame sent\nI0226 00:30:18.099540 1671 log.go:172] (0xc000820d10) Data frame received for 5\nI0226 00:30:18.099549 1671 log.go:172] (0xc0004a7360) (5) Data frame handling\nI0226 00:30:18.099559 1671 log.go:172] (0xc0004a7360) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0226 00:30:18.159993 1671 log.go:172] (0xc000820d10) Data frame received for 1\nI0226 00:30:18.160386 1671 log.go:172] (0xc000820d10) (0xc0004a7360) Stream removed, broadcasting: 5\nI0226 00:30:18.160446 1671 log.go:172] (0xc0008b6460) (1) Data frame handling\nI0226 00:30:18.160467 1671 log.go:172] (0xc0008b6460) (1) Data frame sent\nI0226 00:30:18.160488 1671 log.go:172] (0xc000820d10) (0xc0006926e0) Stream removed, broadcasting: 3\nI0226 00:30:18.160518 1671 log.go:172] (0xc000820d10) (0xc0008b6460) Stream removed, broadcasting: 1\nI0226 00:30:18.160537 1671 log.go:172] (0xc000820d10) Go away received\nI0226 00:30:18.161010 1671 log.go:172] (0xc000820d10) (0xc0008b6460) Stream removed, broadcasting: 1\nI0226 00:30:18.161024 1671 log.go:172] (0xc000820d10) (0xc0006926e0) Stream removed, broadcasting: 3\nI0226 00:30:18.161031 1671 log.go:172] (0xc000820d10) (0xc0004a7360) Stream removed, broadcasting: 5\n" Feb 26 00:30:18.170: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 26 00:30:18.170: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 26 00:30:18.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 26 00:30:18.554: INFO: stderr: "I0226 00:30:18.329238 1692 log.go:172] (0xc0000f58c0) (0xc0005c48c0) Create stream\nI0226 00:30:18.329341 1692 log.go:172] (0xc0000f58c0) (0xc0005c48c0) Stream added, broadcasting: 1\nI0226 00:30:18.353078 1692 log.go:172] (0xc0000f58c0) Reply frame received for 1\nI0226 00:30:18.353162 1692 log.go:172] (0xc0000f58c0) (0xc0006ca000) Create stream\nI0226 00:30:18.353192 1692 log.go:172] (0xc0000f58c0) (0xc0006ca000) Stream added, broadcasting: 3\nI0226 00:30:18.354958 1692 log.go:172] (0xc0000f58c0) Reply frame received for 3\nI0226 00:30:18.355032 1692 log.go:172] (0xc0000f58c0) (0xc0006ca0a0) Create stream\nI0226 00:30:18.355044 1692 log.go:172] (0xc0000f58c0) (0xc0006ca0a0) Stream added, broadcasting: 5\nI0226 00:30:18.356643 1692 log.go:172] (0xc0000f58c0) Reply frame received for 5\nI0226 00:30:18.440370 1692 log.go:172] (0xc0000f58c0) Data frame received for 5\nI0226 00:30:18.440494 1692 log.go:172] (0xc0006ca0a0) (5) Data frame handling\nI0226 00:30:18.440526 1692 log.go:172] (0xc0006ca0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0226 00:30:18.440570 1692 log.go:172] (0xc0000f58c0) Data frame received for 3\nI0226 00:30:18.440582 1692 log.go:172] (0xc0006ca000) (3) Data frame handling\nI0226 00:30:18.440616 1692 log.go:172] (0xc0006ca000) 
(3) Data frame sent\nI0226 00:30:18.537712 1692 log.go:172] (0xc0000f58c0) (0xc0006ca000) Stream removed, broadcasting: 3\nI0226 00:30:18.538061 1692 log.go:172] (0xc0000f58c0) Data frame received for 1\nI0226 00:30:18.538097 1692 log.go:172] (0xc0005c48c0) (1) Data frame handling\nI0226 00:30:18.538127 1692 log.go:172] (0xc0005c48c0) (1) Data frame sent\nI0226 00:30:18.538138 1692 log.go:172] (0xc0000f58c0) (0xc0005c48c0) Stream removed, broadcasting: 1\nI0226 00:30:18.538437 1692 log.go:172] (0xc0000f58c0) (0xc0006ca0a0) Stream removed, broadcasting: 5\nI0226 00:30:18.538809 1692 log.go:172] (0xc0000f58c0) Go away received\nI0226 00:30:18.540099 1692 log.go:172] (0xc0000f58c0) (0xc0005c48c0) Stream removed, broadcasting: 1\nI0226 00:30:18.540279 1692 log.go:172] (0xc0000f58c0) (0xc0006ca000) Stream removed, broadcasting: 3\nI0226 00:30:18.540292 1692 log.go:172] (0xc0000f58c0) (0xc0006ca0a0) Stream removed, broadcasting: 5\n" Feb 26 00:30:18.555: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 26 00:30:18.555: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 26 00:30:18.555: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 26 00:30:48.593: INFO: Deleting all statefulset in ns statefulset-5555 Feb 26 00:30:48.598: INFO: Scaling statefulset ss to 0 Feb 26 00:30:48.624: INFO: Waiting for statefulset status.replicas updated to 0 Feb 26 00:30:48.629: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:30:48.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5555" for this suite. 
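The halting behaviour verified above hinges on readiness: the test kubectl-execs mv to take index.html out of the httpd docroot so the pod's readiness probe fails, and under OrderedReady pod management the controller neither scales past an unhealthy ordinal nor proceeds out of order. A sketch of a StatefulSet with that shape, for orientation only: the names mirror the log but the spec is an assumption, and it presumes client-go v0.18 to v0.22, where Create takes a context and the probe field is still named Handler (later renamed ProbeHandler):

// ss.go: a StatefulSet of the shape the scaling spec above exercises.
package main

import (
	"context"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	replicas := int32(1)
	labels := map[string]string{"foo": "bar", "baz": "blah"}
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "statefulset-5555"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // the headless service the spec creates first
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// OrderedReady (the default) is what makes scale-up and scale-down
			// halt while any ordinal is unhealthy.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "httpd:2.4.38-alpine",
						// Readiness tracks /index.html, so moving the file away
						// (as the kubectl exec above does) flips Ready to false.
						ReadinessProbe: &corev1.Probe{
							Handler: corev1.Handler{
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/index.html",
									Port: intstr.FromInt(80),
								},
							},
						},
					}},
				},
			},
		},
	}
	_, err = cs.AppsV1().StatefulSets("statefulset-5555").Create(context.TODO(), ss, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
}

Scaling up to 3 and back to 0, as in the log, is then just an update of Replicas; the controller creates ss-0, ss-1, ss-2 in order and deletes them in reverse.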
• [SLOW TEST:103.912 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":280,"completed":109,"skipped":1705,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:30:48.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 26 00:30:48.881: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1933051-ec87-4a1d-bee4-8537310bb617" in namespace "projected-8104" to be "success or failure" Feb 26 00:30:48.907: INFO: Pod "downwardapi-volume-c1933051-ec87-4a1d-bee4-8537310bb617": Phase="Pending", Reason="", readiness=false. Elapsed: 25.497651ms Feb 26 00:30:50.916: INFO: Pod "downwardapi-volume-c1933051-ec87-4a1d-bee4-8537310bb617": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034421483s Feb 26 00:30:52.925: INFO: Pod "downwardapi-volume-c1933051-ec87-4a1d-bee4-8537310bb617": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042910436s Feb 26 00:30:54.932: INFO: Pod "downwardapi-volume-c1933051-ec87-4a1d-bee4-8537310bb617": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050201705s Feb 26 00:30:56.936: INFO: Pod "downwardapi-volume-c1933051-ec87-4a1d-bee4-8537310bb617": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054707268s Feb 26 00:30:58.948: INFO: Pod "downwardapi-volume-c1933051-ec87-4a1d-bee4-8537310bb617": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.066204523s STEP: Saw pod success Feb 26 00:30:58.948: INFO: Pod "downwardapi-volume-c1933051-ec87-4a1d-bee4-8537310bb617" satisfied condition "success or failure" Feb 26 00:30:58.953: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c1933051-ec87-4a1d-bee4-8537310bb617 container client-container: STEP: delete the pod Feb 26 00:30:59.331: INFO: Waiting for pod downwardapi-volume-c1933051-ec87-4a1d-bee4-8537310bb617 to disappear Feb 26 00:30:59.402: INFO: Pod downwardapi-volume-c1933051-ec87-4a1d-bee4-8537310bb617 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:30:59.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8104" for this suite. • [SLOW TEST:10.721 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":110,"skipped":1705,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:30:59.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 26 00:30:59.632: INFO: Waiting up to 5m0s for pod "pod-edd19bdc-88d0-45ab-93a2-8524f18c0f93" in namespace "emptydir-7963" to be "success or failure" Feb 26 00:30:59.708: INFO: Pod "pod-edd19bdc-88d0-45ab-93a2-8524f18c0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 76.393216ms Feb 26 00:31:01.719: INFO: Pod "pod-edd19bdc-88d0-45ab-93a2-8524f18c0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086976696s Feb 26 00:31:03.727: INFO: Pod "pod-edd19bdc-88d0-45ab-93a2-8524f18c0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095292565s Feb 26 00:31:05.734: INFO: Pod "pod-edd19bdc-88d0-45ab-93a2-8524f18c0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102533961s Feb 26 00:31:07.744: INFO: Pod "pod-edd19bdc-88d0-45ab-93a2-8524f18c0f93": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.112136593s STEP: Saw pod success Feb 26 00:31:07.744: INFO: Pod "pod-edd19bdc-88d0-45ab-93a2-8524f18c0f93" satisfied condition "success or failure" Feb 26 00:31:07.751: INFO: Trying to get logs from node jerma-node pod pod-edd19bdc-88d0-45ab-93a2-8524f18c0f93 container test-container: STEP: delete the pod Feb 26 00:31:08.015: INFO: Waiting for pod pod-edd19bdc-88d0-45ab-93a2-8524f18c0f93 to disappear Feb 26 00:31:08.027: INFO: Pod pod-edd19bdc-88d0-45ab-93a2-8524f18c0f93 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:31:08.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7963" for this suite. • [SLOW TEST:8.635 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":111,"skipped":1709,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:31:08.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 26 00:31:08.294: INFO: Waiting up to 5m0s for pod "pod-f8d2573c-fa23-49cf-8cc6-189136e2021e" in namespace "emptydir-3141" to be "success or failure" Feb 26 00:31:08.307: INFO: Pod "pod-f8d2573c-fa23-49cf-8cc6-189136e2021e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.324042ms Feb 26 00:31:10.316: INFO: Pod "pod-f8d2573c-fa23-49cf-8cc6-189136e2021e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022314605s Feb 26 00:31:12.322: INFO: Pod "pod-f8d2573c-fa23-49cf-8cc6-189136e2021e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028320595s Feb 26 00:31:14.331: INFO: Pod "pod-f8d2573c-fa23-49cf-8cc6-189136e2021e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037378417s Feb 26 00:31:16.339: INFO: Pod "pod-f8d2573c-fa23-49cf-8cc6-189136e2021e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045255157s Feb 26 00:31:18.347: INFO: Pod "pod-f8d2573c-fa23-49cf-8cc6-189136e2021e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.053428745s STEP: Saw pod success Feb 26 00:31:18.348: INFO: Pod "pod-f8d2573c-fa23-49cf-8cc6-189136e2021e" satisfied condition "success or failure" Feb 26 00:31:18.352: INFO: Trying to get logs from node jerma-node pod pod-f8d2573c-fa23-49cf-8cc6-189136e2021e container test-container: STEP: delete the pod Feb 26 00:31:18.625: INFO: Waiting for pod pod-f8d2573c-fa23-49cf-8cc6-189136e2021e to disappear Feb 26 00:31:18.632: INFO: Pod pod-f8d2573c-fa23-49cf-8cc6-189136e2021e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:31:18.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3141" for this suite. • [SLOW TEST:10.603 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":112,"skipped":1741,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:31:18.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:31:18.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1633" for this suite. 
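The CRD spec above touches no custom resources at all; it only walks the discovery documents: /apis for the apiextensions.k8s.io group, the group document for the v1 version, and the group/version document for the customresourcedefinitions resource. The same walk through client-go's discovery client, a minimal sketch assuming a recent client-go:

// discovery.go: walk the discovery documents the CRD spec above checks.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	disc := cs.Discovery()

	// /apis: find the apiextensions.k8s.io group and the versions it serves.
	groups, err := disc.ServerGroups()
	if err != nil {
		log.Fatal(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			fmt.Println("group versions:", g.Versions)
		}
	}

	// /apis/apiextensions.k8s.io/v1: confirm customresourcedefinitions is served.
	res, err := disc.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range res.APIResources {
		if r.Name == "customresourcedefinitions" {
			fmt.Println("found:", r.Name, "kind:", r.Kind)
		}
	}
}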
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":280,"completed":113,"skipped":1749,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:31:19.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Feb 26 00:31:30.075: INFO: Successfully updated pod "annotationupdate75105817-7e31-45ee-b510-641d9fba0b18" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:31:32.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7583" for this suite. • [SLOW TEST:13.095 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":114,"skipped":1769,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:31:32.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 
[AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 26 00:31:44.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9694" for this suite. • [SLOW TEST:12.228 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":280,"completed":115,"skipped":1803,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 26 00:31:44.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 26 00:31:44.568: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 17.354213ms)
Feb 26 00:31:44.572: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.277963ms)
Feb 26 00:31:44.577: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.179228ms)
Feb 26 00:31:44.581: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.963952ms)
Feb 26 00:31:44.585: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.299118ms)
Feb 26 00:31:44.589: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.455146ms)
Feb 26 00:31:44.594: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.171718ms)
Feb 26 00:31:44.598: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.841752ms)
Feb 26 00:31:44.602: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.9739ms)
Feb 26 00:31:44.605: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.808816ms)
Feb 26 00:31:44.609: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.874626ms)
Feb 26 00:31:44.613: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.821658ms)
Feb 26 00:31:44.618: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.505413ms)
Feb 26 00:31:44.661: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 43.740779ms)
Feb 26 00:31:44.666: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.843983ms)
Feb 26 00:31:44.672: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.589248ms)
Feb 26 00:31:44.677: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.28452ms)
Feb 26 00:31:44.683: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.381651ms)
Feb 26 00:31:44.687: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.943627ms)
Feb 26 00:31:44.690: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.435736ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:31:44.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9952" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":280,"completed":116,"skipped":1811,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:31:44.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0226 00:31:47.473607       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 00:31:47.473: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:31:47.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7484" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":280,"completed":117,"skipped":1819,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:31:47.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-7856
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 26 00:31:48.606: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 26 00:31:48.681: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 26 00:31:50.711: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 26 00:31:52.799: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 26 00:31:55.175: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 26 00:31:56.974: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 26 00:31:59.037: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 26 00:32:01.401: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 26 00:32:03.122: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 26 00:32:04.773: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 26 00:32:06.743: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 26 00:32:08.694: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 26 00:32:10.689: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 26 00:32:12.692: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 26 00:32:14.691: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 26 00:32:16.690: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 26 00:32:18.691: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 26 00:32:20.690: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 26 00:32:22.691: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 26 00:32:22.701: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 26 00:32:32.850: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7856 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 00:32:32.850: INFO: >>> kubeConfig: /root/.kube/config
I0226 00:32:32.910948       9 log.go:172] (0xc002860e70) (0xc001e3e820) Create stream
I0226 00:32:32.911315       9 log.go:172] (0xc002860e70) (0xc001e3e820) Stream added, broadcasting: 1
I0226 00:32:32.916279       9 log.go:172] (0xc002860e70) Reply frame received for 1
I0226 00:32:32.916345       9 log.go:172] (0xc002860e70) (0xc00249b5e0) Create stream
I0226 00:32:32.916367       9 log.go:172] (0xc002860e70) (0xc00249b5e0) Stream added, broadcasting: 3
I0226 00:32:32.918416       9 log.go:172] (0xc002860e70) Reply frame received for 3
I0226 00:32:32.918453       9 log.go:172] (0xc002860e70) (0xc00249b680) Create stream
I0226 00:32:32.918469       9 log.go:172] (0xc002860e70) (0xc00249b680) Stream added, broadcasting: 5
I0226 00:32:32.920331       9 log.go:172] (0xc002860e70) Reply frame received for 5
I0226 00:32:33.020120       9 log.go:172] (0xc002860e70) Data frame received for 3
I0226 00:32:33.020219       9 log.go:172] (0xc00249b5e0) (3) Data frame handling
I0226 00:32:33.020239       9 log.go:172] (0xc00249b5e0) (3) Data frame sent
I0226 00:32:33.138743       9 log.go:172] (0xc002860e70) (0xc00249b5e0) Stream removed, broadcasting: 3
I0226 00:32:33.139226       9 log.go:172] (0xc002860e70) Data frame received for 1
I0226 00:32:33.139259       9 log.go:172] (0xc001e3e820) (1) Data frame handling
I0226 00:32:33.139324       9 log.go:172] (0xc001e3e820) (1) Data frame sent
I0226 00:32:33.139353       9 log.go:172] (0xc002860e70) (0xc001e3e820) Stream removed, broadcasting: 1
I0226 00:32:33.139716       9 log.go:172] (0xc002860e70) (0xc00249b680) Stream removed, broadcasting: 5
I0226 00:32:33.139892       9 log.go:172] (0xc002860e70) (0xc001e3e820) Stream removed, broadcasting: 1
I0226 00:32:33.139909       9 log.go:172] (0xc002860e70) (0xc00249b5e0) Stream removed, broadcasting: 3
I0226 00:32:33.139927       9 log.go:172] (0xc002860e70) (0xc00249b680) Stream removed, broadcasting: 5
I0226 00:32:33.140494       9 log.go:172] (0xc002860e70) Go away received
Feb 26 00:32:33.140: INFO: Found all expected endpoints: [netserver-0]
Feb 26 00:32:33.152: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7856 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 00:32:33.153: INFO: >>> kubeConfig: /root/.kube/config
I0226 00:32:33.228968       9 log.go:172] (0xc002e32160) (0xc0023fa280) Create stream
I0226 00:32:33.229414       9 log.go:172] (0xc002e32160) (0xc0023fa280) Stream added, broadcasting: 1
I0226 00:32:33.242928       9 log.go:172] (0xc002e32160) Reply frame received for 1
I0226 00:32:33.243214       9 log.go:172] (0xc002e32160) (0xc00249b860) Create stream
I0226 00:32:33.243275       9 log.go:172] (0xc002e32160) (0xc00249b860) Stream added, broadcasting: 3
I0226 00:32:33.245771       9 log.go:172] (0xc002e32160) Reply frame received for 3
I0226 00:32:33.245803       9 log.go:172] (0xc002e32160) (0xc001e3e960) Create stream
I0226 00:32:33.245821       9 log.go:172] (0xc002e32160) (0xc001e3e960) Stream added, broadcasting: 5
I0226 00:32:33.251131       9 log.go:172] (0xc002e32160) Reply frame received for 5
I0226 00:32:33.328347       9 log.go:172] (0xc002e32160) Data frame received for 3
I0226 00:32:33.328419       9 log.go:172] (0xc00249b860) (3) Data frame handling
I0226 00:32:33.328438       9 log.go:172] (0xc00249b860) (3) Data frame sent
I0226 00:32:33.398161       9 log.go:172] (0xc002e32160) (0xc001e3e960) Stream removed, broadcasting: 5
I0226 00:32:33.398259       9 log.go:172] (0xc002e32160) Data frame received for 1
I0226 00:32:33.398278       9 log.go:172] (0xc002e32160) (0xc00249b860) Stream removed, broadcasting: 3
I0226 00:32:33.398315       9 log.go:172] (0xc0023fa280) (1) Data frame handling
I0226 00:32:33.398328       9 log.go:172] (0xc0023fa280) (1) Data frame sent
I0226 00:32:33.398367       9 log.go:172] (0xc002e32160) (0xc0023fa280) Stream removed, broadcasting: 1
I0226 00:32:33.398382       9 log.go:172] (0xc002e32160) Go away received
I0226 00:32:33.398503       9 log.go:172] (0xc002e32160) (0xc0023fa280) Stream removed, broadcasting: 1
I0226 00:32:33.398526       9 log.go:172] (0xc002e32160) (0xc00249b860) Stream removed, broadcasting: 3
I0226 00:32:33.398539       9 log.go:172] (0xc002e32160) (0xc001e3e960) Stream removed, broadcasting: 5
Feb 26 00:32:33.398: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:32:33.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7856" for this suite.

• [SLOW TEST:46.309 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":118,"skipped":1824,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:32:33.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-01233efd-da0e-409f-aeee-62aba125aa74
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:32:33.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8485" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":280,"completed":119,"skipped":1856,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:32:34.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 00:32:34.166: INFO: Create a RollingUpdate DaemonSet
Feb 26 00:32:34.207: INFO: Check that daemon pods launch on every node of the cluster
Feb 26 00:32:34.226: INFO: Number of nodes with available pods: 0
Feb 26 00:32:34.226: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:35.239: INFO: Number of nodes with available pods: 0
Feb 26 00:32:35.240: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:36.976: INFO: Number of nodes with available pods: 0
Feb 26 00:32:36.976: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:37.388: INFO: Number of nodes with available pods: 0
Feb 26 00:32:37.388: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:38.264: INFO: Number of nodes with available pods: 0
Feb 26 00:32:38.265: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:39.244: INFO: Number of nodes with available pods: 0
Feb 26 00:32:39.244: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:43.357: INFO: Number of nodes with available pods: 0
Feb 26 00:32:43.357: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:45.177: INFO: Number of nodes with available pods: 0
Feb 26 00:32:45.177: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:45.694: INFO: Number of nodes with available pods: 0
Feb 26 00:32:45.695: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:48.025: INFO: Number of nodes with available pods: 0
Feb 26 00:32:48.025: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:49.972: INFO: Number of nodes with available pods: 0
Feb 26 00:32:49.972: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:50.250: INFO: Number of nodes with available pods: 0
Feb 26 00:32:50.251: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:51.237: INFO: Number of nodes with available pods: 0
Feb 26 00:32:51.237: INFO: Node jerma-node is running more than one daemon pod
Feb 26 00:32:52.238: INFO: Number of nodes with available pods: 2
Feb 26 00:32:52.238: INFO: Number of running nodes: 2, number of available pods: 2
Feb 26 00:32:52.238: INFO: Update the DaemonSet to trigger a rollout
Feb 26 00:32:52.244: INFO: Updating DaemonSet daemon-set
Feb 26 00:33:03.296: INFO: Roll back the DaemonSet before rollout is complete
Feb 26 00:33:03.303: INFO: Updating DaemonSet daemon-set
Feb 26 00:33:03.303: INFO: Make sure DaemonSet rollback is complete
Feb 26 00:33:03.344: INFO: Wrong image for pod: daemon-set-5btwp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 26 00:33:03.344: INFO: Pod daemon-set-5btwp is not available
Feb 26 00:33:04.386: INFO: Wrong image for pod: daemon-set-5btwp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 26 00:33:04.386: INFO: Pod daemon-set-5btwp is not available
Feb 26 00:33:05.378: INFO: Wrong image for pod: daemon-set-5btwp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 26 00:33:05.379: INFO: Pod daemon-set-5btwp is not available
Feb 26 00:33:06.381: INFO: Wrong image for pod: daemon-set-5btwp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 26 00:33:06.381: INFO: Pod daemon-set-5btwp is not available
Feb 26 00:33:07.381: INFO: Wrong image for pod: daemon-set-5btwp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 26 00:33:07.381: INFO: Pod daemon-set-5btwp is not available
Feb 26 00:33:08.383: INFO: Wrong image for pod: daemon-set-5btwp. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 26 00:33:08.383: INFO: Pod daemon-set-5btwp is not available
Feb 26 00:33:09.381: INFO: Pod daemon-set-p5kbz is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9331, will wait for the garbage collector to delete the pods
Feb 26 00:33:09.469: INFO: Deleting DaemonSet.extensions daemon-set took: 16.18189ms
Feb 26 00:33:09.769: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.600011ms
Feb 26 00:33:16.278: INFO: Number of nodes with available pods: 0
Feb 26 00:33:16.279: INFO: Number of running nodes: 0, number of available pods: 0
Feb 26 00:33:16.283: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9331/daemonsets","resourceVersion":"10767431"},"items":null}

Feb 26 00:33:16.287: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9331/pods","resourceVersion":"10767431"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:33:16.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9331" for this suite.

• [SLOW TEST:42.306 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":280,"completed":120,"skipped":1871,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:33:16.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 00:33:16.456: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9878b979-ce8e-4dfa-bc9d-1b1119974034" in namespace "downward-api-686" to be "success or failure"
Feb 26 00:33:16.505: INFO: Pod "downwardapi-volume-9878b979-ce8e-4dfa-bc9d-1b1119974034": Phase="Pending", Reason="", readiness=false. Elapsed: 47.915367ms
Feb 26 00:33:18.624: INFO: Pod "downwardapi-volume-9878b979-ce8e-4dfa-bc9d-1b1119974034": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167161867s
Feb 26 00:33:20.633: INFO: Pod "downwardapi-volume-9878b979-ce8e-4dfa-bc9d-1b1119974034": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176196534s
Feb 26 00:33:22.645: INFO: Pod "downwardapi-volume-9878b979-ce8e-4dfa-bc9d-1b1119974034": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187793363s
Feb 26 00:33:24.655: INFO: Pod "downwardapi-volume-9878b979-ce8e-4dfa-bc9d-1b1119974034": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198183476s
Feb 26 00:33:26.661: INFO: Pod "downwardapi-volume-9878b979-ce8e-4dfa-bc9d-1b1119974034": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.204449228s
STEP: Saw pod success
Feb 26 00:33:26.661: INFO: Pod "downwardapi-volume-9878b979-ce8e-4dfa-bc9d-1b1119974034" satisfied condition "success or failure"
Feb 26 00:33:26.665: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9878b979-ce8e-4dfa-bc9d-1b1119974034 container client-container: 
STEP: delete the pod
Feb 26 00:33:26.950: INFO: Waiting for pod downwardapi-volume-9878b979-ce8e-4dfa-bc9d-1b1119974034 to disappear
Feb 26 00:33:26.956: INFO: Pod downwardapi-volume-9878b979-ce8e-4dfa-bc9d-1b1119974034 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:33:26.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-686" for this suite.

• [SLOW TEST:10.673 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":121,"skipped":1886,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:33:26.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 00:33:27.165: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 26 00:33:29.738: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:33:29.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4817" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":280,"completed":122,"skipped":1899,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:33:29.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7248
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-7248
I0226 00:33:31.111621       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7248, replica count: 2
I0226 00:33:34.163385       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 00:33:37.163826       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 00:33:40.165028       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 00:33:43.165958       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 00:33:46.166379       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 00:33:49.167057       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 00:33:52.167746       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 26 00:33:52.167: INFO: Creating new exec pod
Feb 26 00:33:59.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7248 execpodgs584 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 26 00:33:59.651: INFO: stderr: "I0226 00:33:59.494955    1715 log.go:172] (0xc000ab4160) (0xc000b26140) Create stream\nI0226 00:33:59.495489    1715 log.go:172] (0xc000ab4160) (0xc000b26140) Stream added, broadcasting: 1\nI0226 00:33:59.499286    1715 log.go:172] (0xc000ab4160) Reply frame received for 1\nI0226 00:33:59.499361    1715 log.go:172] (0xc000ab4160) (0xc000a7c0a0) Create stream\nI0226 00:33:59.499376    1715 log.go:172] (0xc000ab4160) (0xc000a7c0a0) Stream added, broadcasting: 3\nI0226 00:33:59.500763    1715 log.go:172] (0xc000ab4160) Reply frame received for 3\nI0226 00:33:59.500800    1715 log.go:172] (0xc000ab4160) (0xc000b261e0) Create stream\nI0226 00:33:59.500816    1715 log.go:172] (0xc000ab4160) (0xc000b261e0) Stream added, broadcasting: 5\nI0226 00:33:59.503635    1715 log.go:172] (0xc000ab4160) Reply frame received for 5\nI0226 00:33:59.572198    1715 log.go:172] (0xc000ab4160) Data frame received for 5\nI0226 00:33:59.572315    1715 log.go:172] (0xc000b261e0) (5) Data frame handling\nI0226 00:33:59.572356    1715 log.go:172] (0xc000b261e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0226 00:33:59.576326    1715 log.go:172] (0xc000ab4160) Data frame received for 5\nI0226 00:33:59.576441    1715 log.go:172] (0xc000b261e0) (5) Data frame handling\nI0226 00:33:59.576536    1715 log.go:172] (0xc000b261e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0226 00:33:59.641384    1715 log.go:172] (0xc000ab4160) (0xc000a7c0a0) Stream removed, broadcasting: 3\nI0226 00:33:59.641541    1715 log.go:172] (0xc000ab4160) Data frame received for 1\nI0226 00:33:59.641562    1715 log.go:172] (0xc000b26140) (1) Data frame handling\nI0226 00:33:59.641584    1715 log.go:172] (0xc000b26140) (1) Data frame sent\nI0226 00:33:59.641604    1715 log.go:172] (0xc000ab4160) (0xc000b26140) Stream removed, broadcasting: 1\nI0226 00:33:59.641706    1715 log.go:172] (0xc000ab4160) (0xc000b261e0) Stream removed, broadcasting: 5\nI0226 00:33:59.641753    1715 log.go:172] (0xc000ab4160) Go away received\nI0226 00:33:59.642985    1715 log.go:172] (0xc000ab4160) (0xc000b26140) Stream removed, broadcasting: 1\nI0226 00:33:59.642996    1715 log.go:172] (0xc000ab4160) (0xc000a7c0a0) Stream removed, broadcasting: 3\nI0226 00:33:59.643000    1715 log.go:172] (0xc000ab4160) (0xc000b261e0) Stream removed, broadcasting: 5\n"
Feb 26 00:33:59.651: INFO: stdout: ""
Feb 26 00:33:59.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7248 execpodgs584 -- /bin/sh -x -c nc -zv -t -w 2 10.96.110.37 80'
Feb 26 00:34:00.061: INFO: stderr: "I0226 00:33:59.811603    1735 log.go:172] (0xc000bda420) (0xc0006d5f40) Create stream\nI0226 00:33:59.811806    1735 log.go:172] (0xc000bda420) (0xc0006d5f40) Stream added, broadcasting: 1\nI0226 00:33:59.816197    1735 log.go:172] (0xc000bda420) Reply frame received for 1\nI0226 00:33:59.816235    1735 log.go:172] (0xc000bda420) (0xc000a76140) Create stream\nI0226 00:33:59.816246    1735 log.go:172] (0xc000bda420) (0xc000a76140) Stream added, broadcasting: 3\nI0226 00:33:59.818257    1735 log.go:172] (0xc000bda420) Reply frame received for 3\nI0226 00:33:59.818362    1735 log.go:172] (0xc000bda420) (0xc000b0c820) Create stream\nI0226 00:33:59.818378    1735 log.go:172] (0xc000bda420) (0xc000b0c820) Stream added, broadcasting: 5\nI0226 00:33:59.819961    1735 log.go:172] (0xc000bda420) Reply frame received for 5\nI0226 00:33:59.941094    1735 log.go:172] (0xc000bda420) Data frame received for 5\nI0226 00:33:59.941258    1735 log.go:172] (0xc000b0c820) (5) Data frame handling\nI0226 00:33:59.941303    1735 log.go:172] (0xc000b0c820) (5) Data frame sent\nI0226 00:33:59.941320    1735 log.go:172] (0xc000bda420) Data frame received for 5\nI0226 00:33:59.941331    1735 log.go:172] (0xc000b0c820) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.110.37 80I0226 00:33:59.941448    1735 log.go:172] (0xc000b0c820) (5) Data frame sent\nI0226 00:33:59.941492    1735 log.go:172] (0xc000bda420) Data frame received for 5\nI0226 00:33:59.941506    1735 log.go:172] (0xc000b0c820) (5) Data frame handling\nI0226 00:33:59.941530    1735 log.go:172] (0xc000b0c820) (5) Data frame sent\n\nI0226 00:33:59.944150    1735 log.go:172] (0xc000bda420) Data frame received for 5\nI0226 00:33:59.944278    1735 log.go:172] (0xc000b0c820) (5) Data frame handling\nI0226 00:33:59.944358    1735 log.go:172] (0xc000b0c820) (5) Data frame sent\nConnection to 10.96.110.37 80 port [tcp/http] succeeded!\nI0226 00:34:00.046570    1735 log.go:172] (0xc000bda420) (0xc000a76140) Stream removed, broadcasting: 3\nI0226 00:34:00.046900    1735 log.go:172] (0xc000bda420) Data frame received for 1\nI0226 00:34:00.046927    1735 log.go:172] (0xc000bda420) (0xc000b0c820) Stream removed, broadcasting: 5\nI0226 00:34:00.046958    1735 log.go:172] (0xc0006d5f40) (1) Data frame handling\nI0226 00:34:00.046968    1735 log.go:172] (0xc0006d5f40) (1) Data frame sent\nI0226 00:34:00.046974    1735 log.go:172] (0xc000bda420) (0xc0006d5f40) Stream removed, broadcasting: 1\nI0226 00:34:00.046984    1735 log.go:172] (0xc000bda420) Go away received\nI0226 00:34:00.048162    1735 log.go:172] (0xc000bda420) (0xc0006d5f40) Stream removed, broadcasting: 1\nI0226 00:34:00.048179    1735 log.go:172] (0xc000bda420) (0xc000a76140) Stream removed, broadcasting: 3\nI0226 00:34:00.048197    1735 log.go:172] (0xc000bda420) (0xc000b0c820) Stream removed, broadcasting: 5\n"
Feb 26 00:34:00.062: INFO: stdout: ""
Feb 26 00:34:00.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7248 execpodgs584 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32240'
Feb 26 00:34:00.489: INFO: stderr: "I0226 00:34:00.276743    1755 log.go:172] (0xc000a36210) (0xc000b023c0) Create stream\nI0226 00:34:00.276868    1755 log.go:172] (0xc000a36210) (0xc000b023c0) Stream added, broadcasting: 1\nI0226 00:34:00.279759    1755 log.go:172] (0xc000a36210) Reply frame received for 1\nI0226 00:34:00.279782    1755 log.go:172] (0xc000a36210) (0xc000b02460) Create stream\nI0226 00:34:00.279793    1755 log.go:172] (0xc000a36210) (0xc000b02460) Stream added, broadcasting: 3\nI0226 00:34:00.280706    1755 log.go:172] (0xc000a36210) Reply frame received for 3\nI0226 00:34:00.280745    1755 log.go:172] (0xc000a36210) (0xc000ab80a0) Create stream\nI0226 00:34:00.280768    1755 log.go:172] (0xc000a36210) (0xc000ab80a0) Stream added, broadcasting: 5\nI0226 00:34:00.283685    1755 log.go:172] (0xc000a36210) Reply frame received for 5\nI0226 00:34:00.374336    1755 log.go:172] (0xc000a36210) Data frame received for 5\nI0226 00:34:00.374468    1755 log.go:172] (0xc000ab80a0) (5) Data frame handling\nI0226 00:34:00.374527    1755 log.go:172] (0xc000ab80a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32240\nI0226 00:34:00.374980    1755 log.go:172] (0xc000a36210) Data frame received for 5\nI0226 00:34:00.375068    1755 log.go:172] (0xc000ab80a0) (5) Data frame handling\nI0226 00:34:00.375110    1755 log.go:172] (0xc000ab80a0) (5) Data frame sent\nConnection to 10.96.2.250 32240 port [tcp/32240] succeeded!\nI0226 00:34:00.462466    1755 log.go:172] (0xc000a36210) (0xc000b02460) Stream removed, broadcasting: 3\nI0226 00:34:00.462637    1755 log.go:172] (0xc000a36210) Data frame received for 1\nI0226 00:34:00.462659    1755 log.go:172] (0xc000b023c0) (1) Data frame handling\nI0226 00:34:00.462678    1755 log.go:172] (0xc000b023c0) (1) Data frame sent\nI0226 00:34:00.462729    1755 log.go:172] (0xc000a36210) (0xc000b023c0) Stream removed, broadcasting: 1\nI0226 00:34:00.464005    1755 log.go:172] (0xc000a36210) (0xc000ab80a0) Stream removed, broadcasting: 5\nI0226 00:34:00.464034    1755 log.go:172] (0xc000a36210) Go away received\nI0226 00:34:00.464404    1755 log.go:172] (0xc000a36210) (0xc000b023c0) Stream removed, broadcasting: 1\nI0226 00:34:00.464590    1755 log.go:172] (0xc000a36210) (0xc000b02460) Stream removed, broadcasting: 3\nI0226 00:34:00.464608    1755 log.go:172] (0xc000a36210) (0xc000ab80a0) Stream removed, broadcasting: 5\n"
Feb 26 00:34:00.489: INFO: stdout: ""
Feb 26 00:34:00.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7248 execpodgs584 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32240'
Feb 26 00:34:00.869: INFO: stderr: "I0226 00:34:00.673495    1775 log.go:172] (0xc000441ad0) (0xc000556280) Create stream\nI0226 00:34:00.673734    1775 log.go:172] (0xc000441ad0) (0xc000556280) Stream added, broadcasting: 1\nI0226 00:34:00.678402    1775 log.go:172] (0xc000441ad0) Reply frame received for 1\nI0226 00:34:00.678433    1775 log.go:172] (0xc000441ad0) (0xc000796820) Create stream\nI0226 00:34:00.678438    1775 log.go:172] (0xc000441ad0) (0xc000796820) Stream added, broadcasting: 3\nI0226 00:34:00.679979    1775 log.go:172] (0xc000441ad0) Reply frame received for 3\nI0226 00:34:00.680081    1775 log.go:172] (0xc000441ad0) (0xc00053b4a0) Create stream\nI0226 00:34:00.680099    1775 log.go:172] (0xc000441ad0) (0xc00053b4a0) Stream added, broadcasting: 5\nI0226 00:34:00.683282    1775 log.go:172] (0xc000441ad0) Reply frame received for 5\nI0226 00:34:00.763734    1775 log.go:172] (0xc000441ad0) Data frame received for 5\nI0226 00:34:00.763784    1775 log.go:172] (0xc00053b4a0) (5) Data frame handling\nI0226 00:34:00.763796    1775 log.go:172] (0xc00053b4a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32240\nI0226 00:34:00.772207    1775 log.go:172] (0xc000441ad0) Data frame received for 5\nI0226 00:34:00.772249    1775 log.go:172] (0xc00053b4a0) (5) Data frame handling\nI0226 00:34:00.772266    1775 log.go:172] (0xc00053b4a0) (5) Data frame sent\nConnection to 10.96.1.234 32240 port [tcp/32240] succeeded!\nI0226 00:34:00.847522    1775 log.go:172] (0xc000441ad0) Data frame received for 1\nI0226 00:34:00.847572    1775 log.go:172] (0xc000556280) (1) Data frame handling\nI0226 00:34:00.847616    1775 log.go:172] (0xc000556280) (1) Data frame sent\nI0226 00:34:00.847836    1775 log.go:172] (0xc000441ad0) (0xc000556280) Stream removed, broadcasting: 1\nI0226 00:34:00.849060    1775 log.go:172] (0xc000441ad0) (0xc000796820) Stream removed, broadcasting: 3\nI0226 00:34:00.849384    1775 log.go:172] (0xc000441ad0) (0xc00053b4a0) Stream removed, broadcasting: 5\nI0226 00:34:00.849438    1775 log.go:172] (0xc000441ad0) (0xc000556280) Stream removed, broadcasting: 1\nI0226 00:34:00.849446    1775 log.go:172] (0xc000441ad0) (0xc000796820) Stream removed, broadcasting: 3\nI0226 00:34:00.849451    1775 log.go:172] (0xc000441ad0) (0xc00053b4a0) Stream removed, broadcasting: 5\n"
Feb 26 00:34:00.870: INFO: stdout: ""
Feb 26 00:34:00.870: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:34:00.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7248" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:31.015 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":280,"completed":123,"skipped":1918,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:34:00.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb 26 00:34:01.092: INFO: >>> kubeConfig: /root/.kube/config
Feb 26 00:34:04.633: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:34:17.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5659" for this suite.

• [SLOW TEST:16.781 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":280,"completed":124,"skipped":1925,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:34:17.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-eaa33111-58fa-47e2-a6a8-34f4f2b820de
STEP: Creating a pod to test consume configMaps
Feb 26 00:34:17.884: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3dbf07d7-b94d-43d0-be14-43fb64b220cb" in namespace "projected-9548" to be "success or failure"
Feb 26 00:34:17.905: INFO: Pod "pod-projected-configmaps-3dbf07d7-b94d-43d0-be14-43fb64b220cb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.751167ms
Feb 26 00:34:19.918: INFO: Pod "pod-projected-configmaps-3dbf07d7-b94d-43d0-be14-43fb64b220cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033832854s
Feb 26 00:34:21.927: INFO: Pod "pod-projected-configmaps-3dbf07d7-b94d-43d0-be14-43fb64b220cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042633081s
Feb 26 00:34:23.935: INFO: Pod "pod-projected-configmaps-3dbf07d7-b94d-43d0-be14-43fb64b220cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050514967s
Feb 26 00:34:26.327: INFO: Pod "pod-projected-configmaps-3dbf07d7-b94d-43d0-be14-43fb64b220cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.442526388s
Feb 26 00:34:28.338: INFO: Pod "pod-projected-configmaps-3dbf07d7-b94d-43d0-be14-43fb64b220cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.453682982s
STEP: Saw pod success
Feb 26 00:34:28.338: INFO: Pod "pod-projected-configmaps-3dbf07d7-b94d-43d0-be14-43fb64b220cb" satisfied condition "success or failure"
Feb 26 00:34:28.346: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-3dbf07d7-b94d-43d0-be14-43fb64b220cb container projected-configmap-volume-test: 
STEP: delete the pod
Feb 26 00:34:28.514: INFO: Waiting for pod pod-projected-configmaps-3dbf07d7-b94d-43d0-be14-43fb64b220cb to disappear
Feb 26 00:34:28.548: INFO: Pod pod-projected-configmaps-3dbf07d7-b94d-43d0-be14-43fb64b220cb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:34:28.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9548" for this suite.

• [SLOW TEST:10.870 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":125,"skipped":1932,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:34:28.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's args
Feb 26 00:34:28.811: INFO: Waiting up to 5m0s for pod "var-expansion-92670031-64d1-44a8-9e59-8b8af87a4b03" in namespace "var-expansion-6420" to be "success or failure"
Feb 26 00:34:28.840: INFO: Pod "var-expansion-92670031-64d1-44a8-9e59-8b8af87a4b03": Phase="Pending", Reason="", readiness=false. Elapsed: 28.836441ms
Feb 26 00:34:30.847: INFO: Pod "var-expansion-92670031-64d1-44a8-9e59-8b8af87a4b03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035548331s
Feb 26 00:34:32.947: INFO: Pod "var-expansion-92670031-64d1-44a8-9e59-8b8af87a4b03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13623481s
Feb 26 00:34:34.962: INFO: Pod "var-expansion-92670031-64d1-44a8-9e59-8b8af87a4b03": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150431534s
Feb 26 00:34:36.970: INFO: Pod "var-expansion-92670031-64d1-44a8-9e59-8b8af87a4b03": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159197015s
Feb 26 00:34:38.983: INFO: Pod "var-expansion-92670031-64d1-44a8-9e59-8b8af87a4b03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.172302816s
STEP: Saw pod success
Feb 26 00:34:38.984: INFO: Pod "var-expansion-92670031-64d1-44a8-9e59-8b8af87a4b03" satisfied condition "success or failure"
Feb 26 00:34:38.989: INFO: Trying to get logs from node jerma-node pod var-expansion-92670031-64d1-44a8-9e59-8b8af87a4b03 container dapi-container: 
STEP: delete the pod
Feb 26 00:34:39.160: INFO: Waiting for pod var-expansion-92670031-64d1-44a8-9e59-8b8af87a4b03 to disappear
Feb 26 00:34:39.166: INFO: Pod var-expansion-92670031-64d1-44a8-9e59-8b8af87a4b03 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:34:39.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6420" for this suite.

• [SLOW TEST:10.564 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":280,"completed":126,"skipped":1935,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:34:39.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-a134008c-e70c-48f7-a623-1ed58f67acbd
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:34:53.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9549" for this suite.

• [SLOW TEST:14.382 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":127,"skipped":1947,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:34:53.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 00:34:53.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 26 00:34:55.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7023 create -f -'
Feb 26 00:34:57.889: INFO: stderr: ""
Feb 26 00:34:57.889: INFO: stdout: "e2e-test-crd-publish-openapi-523-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 26 00:34:57.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7023 delete e2e-test-crd-publish-openapi-523-crds test-cr'
Feb 26 00:34:58.039: INFO: stderr: ""
Feb 26 00:34:58.039: INFO: stdout: "e2e-test-crd-publish-openapi-523-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Feb 26 00:34:58.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7023 apply -f -'
Feb 26 00:34:58.359: INFO: stderr: ""
Feb 26 00:34:58.359: INFO: stdout: "e2e-test-crd-publish-openapi-523-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 26 00:34:58.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7023 delete e2e-test-crd-publish-openapi-523-crds test-cr'
Feb 26 00:34:58.473: INFO: stderr: ""
Feb 26 00:34:58.473: INFO: stdout: "e2e-test-crd-publish-openapi-523-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 26 00:34:58.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-523-crds'
Feb 26 00:34:58.754: INFO: stderr: ""
Feb 26 00:34:58.755: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-523-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:35:01.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7023" for this suite.

• [SLOW TEST:8.125 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":280,"completed":128,"skipped":1949,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:35:01.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:35:09.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9139" for this suite.

• [SLOW TEST:8.210 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":129,"skipped":1950,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:35:09.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 00:35:10.056: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 26 00:35:15.077: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 26 00:35:19.437: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 26 00:35:29.512: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-6230 /apis/apps/v1/namespaces/deployment-6230/deployments/test-cleanup-deployment 8dc746c1-1e8b-4f1b-bafc-9a2f6453395a 10768088 1 2020-02-26 00:35:19 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0055ef1f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-26 00:35:19 +0000 UTC,LastTransitionTime:2020-02-26 00:35:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-02-26 00:35:28 +0000 UTC,LastTransitionTime:2020-02-26 00:35:19 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 26 00:35:29.517: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-6230 /apis/apps/v1/namespaces/deployment-6230/replicasets/test-cleanup-deployment-55ffc6b7b6 1395a19c-5355-4cec-bc9b-9bf2b43c342c 10768074 1 2020-02-26 00:35:19 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 8dc746c1-1e8b-4f1b-bafc-9a2f6453395a 0xc0055ef647 0xc0055ef648}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0055ef6c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 26 00:35:29.523: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-gljzh" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-gljzh test-cleanup-deployment-55ffc6b7b6- deployment-6230 /api/v1/namespaces/deployment-6230/pods/test-cleanup-deployment-55ffc6b7b6-gljzh e3cb790e-18e3-46a7-8ddc-63c360a927f4 10768073 0 2020-02-26 00:35:19 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 1395a19c-5355-4cec-bc9b-9bf2b43c342c 0xc0056128e7 0xc0056128e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z4fzh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z4fzh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z4fzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 00:35:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 00:35:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 00:35:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 00:35:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-26 00:35:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-26 00:35:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://624b2e7134d5b55cdee24468787b9cc299b711b5f143ba61daaac770840967ce,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:35:29.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6230" for this suite.

• [SLOW TEST:19.606 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":280,"completed":130,"skipped":1972,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:35:29.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 26 00:35:29.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3428'
Feb 26 00:35:29.879: INFO: stderr: ""
Feb 26 00:35:29.880: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868
Feb 26 00:35:29.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3428'
Feb 26 00:35:39.334: INFO: stderr: ""
Feb 26 00:35:39.334: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:35:39.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3428" for this suite.

• [SLOW TEST:9.808 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1859
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":280,"completed":131,"skipped":2004,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:35:39.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting the proxy server
Feb 26 00:35:39.533: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:35:39.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1737" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":280,"completed":132,"skipped":2062,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:35:39.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 00:35:39.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5720'
Feb 26 00:35:40.077: INFO: stderr: ""
Feb 26 00:35:40.077: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Feb 26 00:35:40.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5720'
Feb 26 00:35:40.380: INFO: stderr: ""
Feb 26 00:35:40.380: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 26 00:35:41.390: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:35:41.390: INFO: Found 0 / 1
Feb 26 00:35:42.401: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:35:42.402: INFO: Found 0 / 1
Feb 26 00:35:43.391: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:35:43.391: INFO: Found 0 / 1
Feb 26 00:35:44.391: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:35:44.392: INFO: Found 0 / 1
Feb 26 00:35:45.404: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:35:45.404: INFO: Found 0 / 1
Feb 26 00:35:46.389: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:35:46.389: INFO: Found 0 / 1
Feb 26 00:35:47.459: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:35:47.460: INFO: Found 1 / 1
Feb 26 00:35:47.460: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 26 00:35:47.468: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:35:47.468: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 26 00:35:47.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-dgv2b --namespace=kubectl-5720'
Feb 26 00:35:47.681: INFO: stderr: ""
Feb 26 00:35:47.681: INFO: stdout: "Name:         agnhost-master-dgv2b\nNamespace:    kubectl-5720\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Wed, 26 Feb 2020 00:35:40 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.2\nIPs:\n  IP:           10.44.0.2\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://935d63e0129c52e44bbb6ce51da2455af8f6406b99fa8760c684f314d5186fc6\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 26 Feb 2020 00:35:45 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r2m6r (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-r2m6r:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-r2m6r\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-5720/agnhost-master-dgv2b to jerma-node\n  Normal  Pulled     4s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    2s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    1s         kubelet, jerma-node  Started container agnhost-master\n"
Feb 26 00:35:47.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5720'
Feb 26 00:35:47.799: INFO: stderr: ""
Feb 26 00:35:47.799: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-5720\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: agnhost-master-dgv2b\n"
Feb 26 00:35:47.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5720'
Feb 26 00:35:47.922: INFO: stderr: ""
Feb 26 00:35:47.922: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-5720\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.196.240\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.2:6379\nSession Affinity:  None\nEvents:            \n"
Feb 26 00:35:47.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Feb 26 00:35:48.032: INFO: stderr: ""
Feb 26 00:35:48.032: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             \nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     \n  RenewTime:       Wed, 26 Feb 2020 00:35:38 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Wed, 26 Feb 2020 00:32:37 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Wed, 26 Feb 2020 00:32:37 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Wed, 26 Feb 2020 00:32:37 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Wed, 26 Feb 2020 00:32:37 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (4 in total)\n  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         52d\n  kube-system                 weave-net-kz8lv                                             20m (0%)      0 (0%)      0 (0%)           0 (0%)         52d\n  kubectl-5720                agnhost-master-dgv2b                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\n  kubelet-test-9139           busybox-host-aliasescf653c8d-95b5-4a37-b5f3-7eb8c3fed255    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb 26 00:35:48.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5720'
Feb 26 00:35:48.154: INFO: stderr: ""
Feb 26 00:35:48.154: INFO: stdout: "Name:         kubectl-5720\nLabels:       e2e-framework=kubectl\n              e2e-run=0ce5b58f-0b63-4464-ac09-7fcd812e15c5\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:35:48.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5720" for this suite.

• [SLOW TEST:8.522 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":280,"completed":133,"skipped":2094,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:35:48.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-0116569b-a63f-47c4-88ff-590cb648daaa
STEP: Creating a pod to test consume configMaps
Feb 26 00:35:48.375: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f139592-b012-4ebe-a0fc-8982d3ae16fb" in namespace "configmap-2178" to be "success or failure"
Feb 26 00:35:48.396: INFO: Pod "pod-configmaps-9f139592-b012-4ebe-a0fc-8982d3ae16fb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.292326ms
Feb 26 00:35:52.929: INFO: Pod "pod-configmaps-9f139592-b012-4ebe-a0fc-8982d3ae16fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.553366838s
Feb 26 00:35:54.948: INFO: Pod "pod-configmaps-9f139592-b012-4ebe-a0fc-8982d3ae16fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.573062593s
Feb 26 00:35:56.957: INFO: Pod "pod-configmaps-9f139592-b012-4ebe-a0fc-8982d3ae16fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.581514341s
Feb 26 00:35:58.967: INFO: Pod "pod-configmaps-9f139592-b012-4ebe-a0fc-8982d3ae16fb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.591140565s
Feb 26 00:36:00.974: INFO: Pod "pod-configmaps-9f139592-b012-4ebe-a0fc-8982d3ae16fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.598440019s
STEP: Saw pod success
Feb 26 00:36:00.974: INFO: Pod "pod-configmaps-9f139592-b012-4ebe-a0fc-8982d3ae16fb" satisfied condition "success or failure"
Feb 26 00:36:00.979: INFO: Trying to get logs from node jerma-node pod pod-configmaps-9f139592-b012-4ebe-a0fc-8982d3ae16fb container configmap-volume-test: 
STEP: delete the pod
Feb 26 00:36:01.209: INFO: Waiting for pod pod-configmaps-9f139592-b012-4ebe-a0fc-8982d3ae16fb to disappear
Feb 26 00:36:01.230: INFO: Pod pod-configmaps-9f139592-b012-4ebe-a0fc-8982d3ae16fb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:36:01.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2178" for this suite.

• [SLOW TEST:13.099 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":134,"skipped":2111,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:36:01.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 00:36:01.522: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2579dedb-078a-45a2-96e4-2503c6676d31" in namespace "projected-3740" to be "success or failure"
Feb 26 00:36:01.532: INFO: Pod "downwardapi-volume-2579dedb-078a-45a2-96e4-2503c6676d31": Phase="Pending", Reason="", readiness=false. Elapsed: 9.514606ms
Feb 26 00:36:03.562: INFO: Pod "downwardapi-volume-2579dedb-078a-45a2-96e4-2503c6676d31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039533764s
Feb 26 00:36:05.569: INFO: Pod "downwardapi-volume-2579dedb-078a-45a2-96e4-2503c6676d31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046618396s
Feb 26 00:36:07.616: INFO: Pod "downwardapi-volume-2579dedb-078a-45a2-96e4-2503c6676d31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092986355s
Feb 26 00:36:09.629: INFO: Pod "downwardapi-volume-2579dedb-078a-45a2-96e4-2503c6676d31": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105814537s
Feb 26 00:36:11.655: INFO: Pod "downwardapi-volume-2579dedb-078a-45a2-96e4-2503c6676d31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131823636s
STEP: Saw pod success
Feb 26 00:36:11.655: INFO: Pod "downwardapi-volume-2579dedb-078a-45a2-96e4-2503c6676d31" satisfied condition "success or failure"
Feb 26 00:36:11.663: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2579dedb-078a-45a2-96e4-2503c6676d31 container client-container: 
STEP: delete the pod
Feb 26 00:36:11.819: INFO: Waiting for pod downwardapi-volume-2579dedb-078a-45a2-96e4-2503c6676d31 to disappear
Feb 26 00:36:11.839: INFO: Pod downwardapi-volume-2579dedb-078a-45a2-96e4-2503c6676d31 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:36:11.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3740" for this suite.

• [SLOW TEST:10.605 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":135,"skipped":2121,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:36:11.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-9d7a5114-94cd-4661-829d-83eb8a87bb1e
STEP: Creating a pod to test consume secrets
Feb 26 00:36:12.069: INFO: Waiting up to 5m0s for pod "pod-secrets-d1867b3a-5ee8-4c9f-aebb-212ebbc34fa6" in namespace "secrets-1421" to be "success or failure"
Feb 26 00:36:12.179: INFO: Pod "pod-secrets-d1867b3a-5ee8-4c9f-aebb-212ebbc34fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 110.333888ms
Feb 26 00:36:14.220: INFO: Pod "pod-secrets-d1867b3a-5ee8-4c9f-aebb-212ebbc34fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150464798s
Feb 26 00:36:16.226: INFO: Pod "pod-secrets-d1867b3a-5ee8-4c9f-aebb-212ebbc34fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156870761s
Feb 26 00:36:18.233: INFO: Pod "pod-secrets-d1867b3a-5ee8-4c9f-aebb-212ebbc34fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163842246s
Feb 26 00:36:20.240: INFO: Pod "pod-secrets-d1867b3a-5ee8-4c9f-aebb-212ebbc34fa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171205263s
STEP: Saw pod success
Feb 26 00:36:20.240: INFO: Pod "pod-secrets-d1867b3a-5ee8-4c9f-aebb-212ebbc34fa6" satisfied condition "success or failure"
Feb 26 00:36:20.245: INFO: Trying to get logs from node jerma-node pod pod-secrets-d1867b3a-5ee8-4c9f-aebb-212ebbc34fa6 container secret-volume-test: 
STEP: delete the pod
Feb 26 00:36:20.302: INFO: Waiting for pod pod-secrets-d1867b3a-5ee8-4c9f-aebb-212ebbc34fa6 to disappear
Feb 26 00:36:20.309: INFO: Pod pod-secrets-d1867b3a-5ee8-4c9f-aebb-212ebbc34fa6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:36:20.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1421" for this suite.

• [SLOW TEST:8.455 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":136,"skipped":2133,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:36:20.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 00:36:20.442: INFO: Creating deployment "test-recreate-deployment"
Feb 26 00:36:20.449: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb 26 00:36:20.475: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 26 00:36:22.488: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb 26 00:36:22.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718274180, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718274180, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718274180, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718274180, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 00:36:24.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718274180, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718274180, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718274180, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718274180, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 00:36:26.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718274180, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718274180, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718274180, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718274180, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 00:36:28.506: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 26 00:36:28.521: INFO: Updating deployment test-recreate-deployment
Feb 26 00:36:28.521: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 26 00:36:29.162: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-2809 /apis/apps/v1/namespaces/deployment-2809/deployments/test-recreate-deployment 88b5e7b9-6369-4429-85eb-55751fe10f2f 10768441 2 2020-02-26 00:36:20 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005717848  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-26 00:36:28 +0000 UTC,LastTransitionTime:2020-02-26 00:36:28 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-26 00:36:28 +0000 UTC,LastTransitionTime:2020-02-26 00:36:20 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Feb 26 00:36:29.175: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-2809 /apis/apps/v1/namespaces/deployment-2809/replicasets/test-recreate-deployment-5f94c574ff 0949e1be-e9b5-4dbc-b347-26b5b579d881 10768438 1 2020-02-26 00:36:28 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 88b5e7b9-6369-4429-85eb-55751fe10f2f 0xc005717c57 0xc005717c58}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005717cc8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 26 00:36:29.175: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 26 00:36:29.175: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-2809 /apis/apps/v1/namespaces/deployment-2809/replicasets/test-recreate-deployment-799c574856 8a35cd70-21bb-4d9e-8999-2307233c07be 10768430 2 2020-02-26 00:36:20 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 88b5e7b9-6369-4429-85eb-55751fe10f2f 0xc005717d37 0xc005717d38}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005717da8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 26 00:36:29.189: INFO: Pod "test-recreate-deployment-5f94c574ff-brbtd" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-brbtd test-recreate-deployment-5f94c574ff- deployment-2809 /api/v1/namespaces/deployment-2809/pods/test-recreate-deployment-5f94c574ff-brbtd 904a50c2-4f41-4cec-bc61-0b0f1b39e9b1 10768436 0 2020-02-26 00:36:28 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 0949e1be-e9b5-4dbc-b347-26b5b579d881 0xc005788257 0xc005788258}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gmcfv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gmcfv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gmcfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 00:36:28 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:36:29.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2809" for this suite.

• [SLOW TEST:8.875 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":137,"skipped":2145,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:36:29.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating cluster-info
Feb 26 00:36:29.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 26 00:36:29.554: INFO: stderr: ""
Feb 26 00:36:29.554: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:36:29.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5281" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":280,"completed":138,"skipped":2175,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:36:29.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 26 00:36:29.834: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 26 00:36:29.865: INFO: Waiting for terminating namespaces to be deleted...
Feb 26 00:36:29.869: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 26 00:36:29.880: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 26 00:36:29.880: INFO: 	Container weave ready: true, restart count 1
Feb 26 00:36:29.880: INFO: 	Container weave-npc ready: true, restart count 0
Feb 26 00:36:29.880: INFO: test-recreate-deployment-5f94c574ff-brbtd from deployment-2809 started at 2020-02-26 00:36:29 +0000 UTC (1 container statuses recorded)
Feb 26 00:36:29.880: INFO: 	Container httpd ready: false, restart count 0
Feb 26 00:36:29.880: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 26 00:36:29.880: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 26 00:36:29.880: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 26 00:36:29.910: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 26 00:36:29.910: INFO: 	Container weave ready: true, restart count 0
Feb 26 00:36:29.910: INFO: 	Container weave-npc ready: true, restart count 0
Feb 26 00:36:29.910: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 26 00:36:29.910: INFO: 	Container kube-controller-manager ready: true, restart count 19
Feb 26 00:36:29.910: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 26 00:36:29.910: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 26 00:36:29.910: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 26 00:36:29.910: INFO: 	Container kube-scheduler ready: true, restart count 25
Feb 26 00:36:29.910: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 26 00:36:29.910: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 26 00:36:29.910: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 26 00:36:29.911: INFO: 	Container etcd ready: true, restart count 1
Feb 26 00:36:29.911: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 26 00:36:29.911: INFO: 	Container coredns ready: true, restart count 0
Feb 26 00:36:29.911: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 26 00:36:29.911: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-a5b70875-6b52-4eaa-8502-c2f13c81da9e 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here), expecting it to be scheduled
STEP: Trying to create another pod (pod5) with the same hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides, expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-a5b70875-6b52-4eaa-8502-c2f13c81da9e off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-a5b70875-6b52-4eaa-8502-c2f13c81da9e
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:41:54.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6794" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:324.716 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":280,"completed":139,"skipped":2176,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:41:54.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-2ef93fa0-b138-44f9-b38c-365479ebe076
STEP: Creating a pod to test consume secrets
Feb 26 00:41:54.411: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d577e907-fd4d-41cd-a716-07fcc7a3ecef" in namespace "projected-308" to be "success or failure"
Feb 26 00:41:54.418: INFO: Pod "pod-projected-secrets-d577e907-fd4d-41cd-a716-07fcc7a3ecef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.342675ms
Feb 26 00:41:56.427: INFO: Pod "pod-projected-secrets-d577e907-fd4d-41cd-a716-07fcc7a3ecef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015599468s
Feb 26 00:41:58.434: INFO: Pod "pod-projected-secrets-d577e907-fd4d-41cd-a716-07fcc7a3ecef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02286288s
Feb 26 00:42:00.773: INFO: Pod "pod-projected-secrets-d577e907-fd4d-41cd-a716-07fcc7a3ecef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.361571675s
Feb 26 00:42:02.778: INFO: Pod "pod-projected-secrets-d577e907-fd4d-41cd-a716-07fcc7a3ecef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.366569409s
Feb 26 00:42:04.784: INFO: Pod "pod-projected-secrets-d577e907-fd4d-41cd-a716-07fcc7a3ecef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.37280761s
STEP: Saw pod success
Feb 26 00:42:04.784: INFO: Pod "pod-projected-secrets-d577e907-fd4d-41cd-a716-07fcc7a3ecef" satisfied condition "success or failure"
Feb 26 00:42:04.787: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-d577e907-fd4d-41cd-a716-07fcc7a3ecef container projected-secret-volume-test: 
STEP: delete the pod
Feb 26 00:42:04.837: INFO: Waiting for pod pod-projected-secrets-d577e907-fd4d-41cd-a716-07fcc7a3ecef to disappear
Feb 26 00:42:04.858: INFO: Pod pod-projected-secrets-d577e907-fd4d-41cd-a716-07fcc7a3ecef no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:42:04.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-308" for this suite.

• [SLOW TEST:10.583 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":140,"skipped":2228,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:42:04.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9228.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9228.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9228.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9228.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9228.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9228.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 26 00:42:23.280: INFO: DNS probes using dns-9228/dns-test-2789835f-7227-45c9-8203-88c827c51287 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:42:23.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9228" for this suite.

• [SLOW TEST:18.485 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":280,"completed":141,"skipped":2232,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:42:23.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 26 00:42:23.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6855'
Feb 26 00:42:23.641: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 26 00:42:23.641: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795
Feb 26 00:42:23.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-6855'
Feb 26 00:42:24.014: INFO: stderr: ""
Feb 26 00:42:24.014: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:42:24.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6855" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":280,"completed":142,"skipped":2237,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:42:24.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 00:42:24.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 26 00:42:24.354: INFO: stderr: ""
Feb 26 00:42:24.354: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.152+426b3538900329\", GitCommit:\"426b3538900329ed2ce5a0cb1cccf2f0ff32db60\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T12:55:25Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:42:24.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3155" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":280,"completed":143,"skipped":2244,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:42:24.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 26 00:42:24.556: INFO: Waiting up to 5m0s for pod "pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e" in namespace "emptydir-7922" to be "success or failure"
Feb 26 00:42:24.605: INFO: Pod "pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 48.003256ms
Feb 26 00:42:26.616: INFO: Pod "pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058963675s
Feb 26 00:42:28.623: INFO: Pod "pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066630468s
Feb 26 00:42:30.635: INFO: Pod "pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077920256s
Feb 26 00:42:32.641: INFO: Pod "pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084033642s
Feb 26 00:42:34.656: INFO: Pod "pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.099790009s
Feb 26 00:42:36.663: INFO: Pod "pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.106403287s
Feb 26 00:42:38.692: INFO: Pod "pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.135671166s
STEP: Saw pod success
Feb 26 00:42:38.693: INFO: Pod "pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e" satisfied condition "success or failure"
Feb 26 00:42:38.697: INFO: Trying to get logs from node jerma-node pod pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e container test-container: 
STEP: delete the pod
Feb 26 00:42:38.755: INFO: Waiting for pod pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e to disappear
Feb 26 00:42:38.850: INFO: Pod pod-a3c2d474-9a85-4fa8-b092-64aa0d730c4e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:42:38.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7922" for this suite.

• [SLOW TEST:14.495 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":144,"skipped":2278,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:42:38.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 26 00:42:39.018: INFO: Waiting up to 5m0s for pod "pod-be26f361-021d-4bbf-b9d8-66bac262e0e9" in namespace "emptydir-9683" to be "success or failure"
Feb 26 00:42:39.029: INFO: Pod "pod-be26f361-021d-4bbf-b9d8-66bac262e0e9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.539293ms
Feb 26 00:42:41.039: INFO: Pod "pod-be26f361-021d-4bbf-b9d8-66bac262e0e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02102643s
Feb 26 00:42:43.058: INFO: Pod "pod-be26f361-021d-4bbf-b9d8-66bac262e0e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039838663s
Feb 26 00:42:45.064: INFO: Pod "pod-be26f361-021d-4bbf-b9d8-66bac262e0e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045927547s
Feb 26 00:42:47.070: INFO: Pod "pod-be26f361-021d-4bbf-b9d8-66bac262e0e9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052219961s
Feb 26 00:42:49.075: INFO: Pod "pod-be26f361-021d-4bbf-b9d8-66bac262e0e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057402016s
STEP: Saw pod success
Feb 26 00:42:49.075: INFO: Pod "pod-be26f361-021d-4bbf-b9d8-66bac262e0e9" satisfied condition "success or failure"
Feb 26 00:42:49.079: INFO: Trying to get logs from node jerma-node pod pod-be26f361-021d-4bbf-b9d8-66bac262e0e9 container test-container: 
STEP: delete the pod
Feb 26 00:42:49.491: INFO: Waiting for pod pod-be26f361-021d-4bbf-b9d8-66bac262e0e9 to disappear
Feb 26 00:42:49.503: INFO: Pod pod-be26f361-021d-4bbf-b9d8-66bac262e0e9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:42:49.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9683" for this suite.

• [SLOW TEST:10.629 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":145,"skipped":2281,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:42:49.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test hostPath mode
Feb 26 00:42:49.701: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-563" to be "success or failure"
Feb 26 00:42:49.804: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 103.442055ms
Feb 26 00:42:51.816: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114781785s
Feb 26 00:42:53.837: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135788319s
Feb 26 00:42:55.844: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142851548s
Feb 26 00:42:57.860: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158786512s
Feb 26 00:42:59.868: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.166991229s
Feb 26 00:43:01.882: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.181147713s
STEP: Saw pod success
Feb 26 00:43:01.882: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 26 00:43:01.886: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 26 00:43:01.959: INFO: Waiting for pod pod-host-path-test to disappear
Feb 26 00:43:01.967: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:43:01.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-563" for this suite.

• [SLOW TEST:12.454 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":146,"skipped":2317,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:43:01.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6423
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-6423
STEP: creating replication controller externalsvc in namespace services-6423
I0226 00:43:02.454460       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6423, replica count: 2
I0226 00:43:05.506989       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 00:43:08.508792       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 00:43:11.509488       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 00:43:14.510145       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Feb 26 00:43:14.586: INFO: Creating new exec pod
Feb 26 00:43:22.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6423 execpodr562b -- /bin/sh -x -c nslookup clusterip-service'
Feb 26 00:43:23.032: INFO: stderr: "I0226 00:43:22.831839    2178 log.go:172] (0xc000a0e0b0) (0xc000641e00) Create stream\nI0226 00:43:22.832054    2178 log.go:172] (0xc000a0e0b0) (0xc000641e00) Stream added, broadcasting: 1\nI0226 00:43:22.836384    2178 log.go:172] (0xc000a0e0b0) Reply frame received for 1\nI0226 00:43:22.836421    2178 log.go:172] (0xc000a0e0b0) (0xc000641ea0) Create stream\nI0226 00:43:22.836433    2178 log.go:172] (0xc000a0e0b0) (0xc000641ea0) Stream added, broadcasting: 3\nI0226 00:43:22.842492    2178 log.go:172] (0xc000a0e0b0) Reply frame received for 3\nI0226 00:43:22.842542    2178 log.go:172] (0xc000a0e0b0) (0xc000894000) Create stream\nI0226 00:43:22.842610    2178 log.go:172] (0xc000a0e0b0) (0xc000894000) Stream added, broadcasting: 5\nI0226 00:43:22.844503    2178 log.go:172] (0xc000a0e0b0) Reply frame received for 5\nI0226 00:43:22.933067    2178 log.go:172] (0xc000a0e0b0) Data frame received for 5\nI0226 00:43:22.933140    2178 log.go:172] (0xc000894000) (5) Data frame handling\nI0226 00:43:22.933173    2178 log.go:172] (0xc000894000) (5) Data frame sent\n+ nslookup clusterip-service\nI0226 00:43:22.948056    2178 log.go:172] (0xc000a0e0b0) Data frame received for 3\nI0226 00:43:22.948087    2178 log.go:172] (0xc000641ea0) (3) Data frame handling\nI0226 00:43:22.948098    2178 log.go:172] (0xc000641ea0) (3) Data frame sent\nI0226 00:43:22.949821    2178 log.go:172] (0xc000a0e0b0) Data frame received for 3\nI0226 00:43:22.949835    2178 log.go:172] (0xc000641ea0) (3) Data frame handling\nI0226 00:43:22.949847    2178 log.go:172] (0xc000641ea0) (3) Data frame sent\nI0226 00:43:23.022647    2178 log.go:172] (0xc000a0e0b0) Data frame received for 1\nI0226 00:43:23.022770    2178 log.go:172] (0xc000641e00) (1) Data frame handling\nI0226 00:43:23.022811    2178 log.go:172] (0xc000641e00) (1) Data frame sent\nI0226 00:43:23.022873    2178 log.go:172] (0xc000a0e0b0) (0xc000641e00) Stream removed, broadcasting: 1\nI0226 00:43:23.024251    2178 log.go:172] (0xc000a0e0b0) (0xc000894000) Stream removed, broadcasting: 5\nI0226 00:43:23.024499    2178 log.go:172] (0xc000a0e0b0) (0xc000641ea0) Stream removed, broadcasting: 3\nI0226 00:43:23.024587    2178 log.go:172] (0xc000a0e0b0) Go away received\nI0226 00:43:23.024754    2178 log.go:172] (0xc000a0e0b0) (0xc000641e00) Stream removed, broadcasting: 1\nI0226 00:43:23.024772    2178 log.go:172] (0xc000a0e0b0) (0xc000641ea0) Stream removed, broadcasting: 3\nI0226 00:43:23.024865    2178 log.go:172] (0xc000a0e0b0) (0xc000894000) Stream removed, broadcasting: 5\n"
Feb 26 00:43:23.033: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6423.svc.cluster.local\tcanonical name = externalsvc.services-6423.svc.cluster.local.\nName:\texternalsvc.services-6423.svc.cluster.local\nAddress: 10.96.158.63\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-6423, will wait for the garbage collector to delete the pods
Feb 26 00:43:23.096: INFO: Deleting ReplicationController externalsvc took: 6.361293ms
Feb 26 00:43:23.496: INFO: Terminating ReplicationController externalsvc pods took: 400.504654ms
Feb 26 00:43:42.454: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:43:42.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6423" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:40.603 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":280,"completed":147,"skipped":2328,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:43:42.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:43:42.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8041" for this suite.
STEP: Destroying namespace "nspatchtest-ae80a712-767b-4673-b3da-a560867b1df2-2386" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":280,"completed":148,"skipped":2330,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:43:42.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-downwardapi-tdzn
STEP: Creating a pod to test atomic-volume-subpath
Feb 26 00:43:42.973: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-tdzn" in namespace "subpath-3987" to be "success or failure"
Feb 26 00:43:43.004: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Pending", Reason="", readiness=false. Elapsed: 31.055521ms
Feb 26 00:43:45.011: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037869116s
Feb 26 00:43:47.017: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044211458s
Feb 26 00:43:49.074: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100992454s
Feb 26 00:43:51.080: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106592284s
Feb 26 00:43:53.087: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Running", Reason="", readiness=true. Elapsed: 10.113674817s
Feb 26 00:43:55.613: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Running", Reason="", readiness=true. Elapsed: 12.640056889s
Feb 26 00:43:57.621: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Running", Reason="", readiness=true. Elapsed: 14.647968603s
Feb 26 00:43:59.629: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Running", Reason="", readiness=true. Elapsed: 16.65636064s
Feb 26 00:44:01.645: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Running", Reason="", readiness=true. Elapsed: 18.672495453s
Feb 26 00:44:03.656: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Running", Reason="", readiness=true. Elapsed: 20.683138484s
Feb 26 00:44:05.663: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Running", Reason="", readiness=true. Elapsed: 22.690089198s
Feb 26 00:44:07.670: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Running", Reason="", readiness=true. Elapsed: 24.697188795s
Feb 26 00:44:09.677: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Running", Reason="", readiness=true. Elapsed: 26.70395106s
Feb 26 00:44:11.684: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Running", Reason="", readiness=true. Elapsed: 28.711030389s
Feb 26 00:44:13.693: INFO: Pod "pod-subpath-test-downwardapi-tdzn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.720096039s
STEP: Saw pod success
Feb 26 00:44:13.693: INFO: Pod "pod-subpath-test-downwardapi-tdzn" satisfied condition "success or failure"
Feb 26 00:44:13.699: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-tdzn container test-container-subpath-downwardapi-tdzn: 
STEP: delete the pod
Feb 26 00:44:14.093: INFO: Waiting for pod pod-subpath-test-downwardapi-tdzn to disappear
Feb 26 00:44:14.105: INFO: Pod pod-subpath-test-downwardapi-tdzn no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-tdzn
Feb 26 00:44:14.105: INFO: Deleting pod "pod-subpath-test-downwardapi-tdzn" in namespace "subpath-3987"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:44:14.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3987" for this suite.

• [SLOW TEST:31.324 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":280,"completed":149,"skipped":2360,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:44:14.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-37d81748-dbc5-403a-a0f0-60798f02b8e8
STEP: Creating a pod to test consume configMaps
Feb 26 00:44:14.356: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8661c653-7fa7-43bd-a914-4342885ab00c" in namespace "projected-6799" to be "success or failure"
Feb 26 00:44:14.434: INFO: Pod "pod-projected-configmaps-8661c653-7fa7-43bd-a914-4342885ab00c": Phase="Pending", Reason="", readiness=false. Elapsed: 77.470881ms
Feb 26 00:44:16.445: INFO: Pod "pod-projected-configmaps-8661c653-7fa7-43bd-a914-4342885ab00c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088819875s
Feb 26 00:44:18.454: INFO: Pod "pod-projected-configmaps-8661c653-7fa7-43bd-a914-4342885ab00c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098051087s
Feb 26 00:44:20.466: INFO: Pod "pod-projected-configmaps-8661c653-7fa7-43bd-a914-4342885ab00c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109851141s
Feb 26 00:44:22.476: INFO: Pod "pod-projected-configmaps-8661c653-7fa7-43bd-a914-4342885ab00c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119582616s
Feb 26 00:44:24.483: INFO: Pod "pod-projected-configmaps-8661c653-7fa7-43bd-a914-4342885ab00c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.127111673s
STEP: Saw pod success
Feb 26 00:44:24.484: INFO: Pod "pod-projected-configmaps-8661c653-7fa7-43bd-a914-4342885ab00c" satisfied condition "success or failure"
Feb 26 00:44:24.487: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-8661c653-7fa7-43bd-a914-4342885ab00c container projected-configmap-volume-test: 
STEP: delete the pod
Feb 26 00:44:24.566: INFO: Waiting for pod pod-projected-configmaps-8661c653-7fa7-43bd-a914-4342885ab00c to disappear
Feb 26 00:44:24.584: INFO: Pod pod-projected-configmaps-8661c653-7fa7-43bd-a914-4342885ab00c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:44:24.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6799" for this suite.

• [SLOW TEST:10.482 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":150,"skipped":2365,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:44:24.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 00:44:24.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-559b2f2f-5256-4a72-b450-6befece010b3" in namespace "downward-api-2753" to be "success or failure"
Feb 26 00:44:24.742: INFO: Pod "downwardapi-volume-559b2f2f-5256-4a72-b450-6befece010b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526173ms
Feb 26 00:44:26.754: INFO: Pod "downwardapi-volume-559b2f2f-5256-4a72-b450-6befece010b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018307718s
Feb 26 00:44:28.764: INFO: Pod "downwardapi-volume-559b2f2f-5256-4a72-b450-6befece010b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028180621s
Feb 26 00:44:30.778: INFO: Pod "downwardapi-volume-559b2f2f-5256-4a72-b450-6befece010b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04210137s
Feb 26 00:44:32.786: INFO: Pod "downwardapi-volume-559b2f2f-5256-4a72-b450-6befece010b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050159511s
STEP: Saw pod success
Feb 26 00:44:32.786: INFO: Pod "downwardapi-volume-559b2f2f-5256-4a72-b450-6befece010b3" satisfied condition "success or failure"
Feb 26 00:44:32.789: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-559b2f2f-5256-4a72-b450-6befece010b3 container client-container: 
STEP: delete the pod
Feb 26 00:44:32.862: INFO: Waiting for pod downwardapi-volume-559b2f2f-5256-4a72-b450-6befece010b3 to disappear
Feb 26 00:44:32.876: INFO: Pod downwardapi-volume-559b2f2f-5256-4a72-b450-6befece010b3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:44:32.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2753" for this suite.

• [SLOW TEST:8.292 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":151,"skipped":2379,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:44:32.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2277.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2277.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 26 00:44:47.187: INFO: DNS probes using dns-2277/dns-test-c01940b0-0eb8-46da-9ee2-700e2fbaa52b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:44:47.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2277" for this suite.

• [SLOW TEST:14.543 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":280,"completed":152,"skipped":2393,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:44:47.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 26 00:44:56.730: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:44:57.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-624" for this suite.

• [SLOW TEST:10.398 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":280,"completed":153,"skipped":2408,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:44:57.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1066
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating stateful set ss in namespace statefulset-1066
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1066
Feb 26 00:44:58.101: INFO: Found 0 stateful pods, waiting for 1
Feb 26 00:45:08.107: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 26 00:45:18.107: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 26 00:45:18.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 26 00:45:20.591: INFO: stderr: "I0226 00:45:20.315895    2194 log.go:172] (0xc0000f51e0) (0xc0008a20a0) Create stream\nI0226 00:45:20.315978    2194 log.go:172] (0xc0000f51e0) (0xc0008a20a0) Stream added, broadcasting: 1\nI0226 00:45:20.321441    2194 log.go:172] (0xc0000f51e0) Reply frame received for 1\nI0226 00:45:20.321517    2194 log.go:172] (0xc0000f51e0) (0xc0008a2140) Create stream\nI0226 00:45:20.321538    2194 log.go:172] (0xc0000f51e0) (0xc0008a2140) Stream added, broadcasting: 3\nI0226 00:45:20.323834    2194 log.go:172] (0xc0000f51e0) Reply frame received for 3\nI0226 00:45:20.323927    2194 log.go:172] (0xc0000f51e0) (0xc00071a000) Create stream\nI0226 00:45:20.323943    2194 log.go:172] (0xc0000f51e0) (0xc00071a000) Stream added, broadcasting: 5\nI0226 00:45:20.326309    2194 log.go:172] (0xc0000f51e0) Reply frame received for 5\nI0226 00:45:20.414267    2194 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0226 00:45:20.414321    2194 log.go:172] (0xc00071a000) (5) Data frame handling\nI0226 00:45:20.414354    2194 log.go:172] (0xc00071a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0226 00:45:20.451261    2194 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0226 00:45:20.451347    2194 log.go:172] (0xc0008a2140) (3) Data frame handling\nI0226 00:45:20.451376    2194 log.go:172] (0xc0008a2140) (3) Data frame sent\nI0226 00:45:20.566382    2194 log.go:172] (0xc0000f51e0) (0xc0008a2140) Stream removed, broadcasting: 3\nI0226 00:45:20.566769    2194 log.go:172] (0xc0000f51e0) Data frame received for 1\nI0226 00:45:20.566825    2194 log.go:172] (0xc0008a20a0) (1) Data frame handling\nI0226 00:45:20.566857    2194 log.go:172] (0xc0008a20a0) (1) Data frame sent\nI0226 00:45:20.566876    2194 log.go:172] (0xc0000f51e0) (0xc0008a20a0) Stream removed, broadcasting: 1\nI0226 00:45:20.567427    2194 log.go:172] (0xc0000f51e0) (0xc00071a000) Stream removed, broadcasting: 5\nI0226 00:45:20.567565    2194 log.go:172] (0xc0000f51e0) Go away received\nI0226 00:45:20.568702    2194 log.go:172] (0xc0000f51e0) (0xc0008a20a0) Stream removed, broadcasting: 1\nI0226 00:45:20.568728    2194 log.go:172] (0xc0000f51e0) (0xc0008a2140) Stream removed, broadcasting: 3\nI0226 00:45:20.568738    2194 log.go:172] (0xc0000f51e0) (0xc00071a000) Stream removed, broadcasting: 5\n"
Feb 26 00:45:20.592: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 26 00:45:20.592: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 26 00:45:20.601: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 26 00:45:30.619: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
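Note: moving index.html out of htdocs is how the test makes a pod unhealthy without killing it: assuming the webserver container's readiness probe fetches that file over HTTP, the probe now fails and the Ready condition flips to False while the container keeps running. A sketch for watching the flip:

  kubectl get pod ss-0 -n statefulset-1066 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
  # True before the mv, False once the readiness probe starts failing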
Feb 26 00:45:30.619: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 00:45:30.721: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 26 00:45:30.721: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:44:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:44:58 +0000 UTC  }]
Feb 26 00:45:30.721: INFO: 
Feb 26 00:45:30.721: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 26 00:45:32.333: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.942126433s
Feb 26 00:45:33.878: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.330152527s
Feb 26 00:45:34.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.785715258s
Feb 26 00:45:35.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.774098886s
Feb 26 00:45:37.134: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.765124672s
Feb 26 00:45:38.508: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.52950388s
Feb 26 00:45:39.761: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.155185781s
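Note: ss-1 and ss-2 are created here even though ss-0 is still unready; burst scaling relies on the StatefulSet using podManagementPolicy: Parallel, which drops the one-pod-at-a-time ordering of the default OrderedReady policy, so the checks above only have to confirm that the controller stops at the requested count of 3. To inspect the policy (a sketch, assuming the test's manifest sets it):

  kubectl get statefulset ss -n statefulset-1066 -o jsonpath='{.spec.podManagementPolicy}'
  # expected: Parallel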
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1066
Feb 26 00:45:40.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 26 00:45:41.298: INFO: stderr: "I0226 00:45:41.094738    2226 log.go:172] (0xc000a2adc0) (0xc000a18320) Create stream\nI0226 00:45:41.094874    2226 log.go:172] (0xc000a2adc0) (0xc000a18320) Stream added, broadcasting: 1\nI0226 00:45:41.099283    2226 log.go:172] (0xc000a2adc0) Reply frame received for 1\nI0226 00:45:41.099361    2226 log.go:172] (0xc000a2adc0) (0xc000a640a0) Create stream\nI0226 00:45:41.099374    2226 log.go:172] (0xc000a2adc0) (0xc000a640a0) Stream added, broadcasting: 3\nI0226 00:45:41.109145    2226 log.go:172] (0xc000a2adc0) Reply frame received for 3\nI0226 00:45:41.109184    2226 log.go:172] (0xc000a2adc0) (0xc000a183c0) Create stream\nI0226 00:45:41.109209    2226 log.go:172] (0xc000a2adc0) (0xc000a183c0) Stream added, broadcasting: 5\nI0226 00:45:41.110917    2226 log.go:172] (0xc000a2adc0) Reply frame received for 5\nI0226 00:45:41.191468    2226 log.go:172] (0xc000a2adc0) Data frame received for 3\nI0226 00:45:41.191549    2226 log.go:172] (0xc000a640a0) (3) Data frame handling\nI0226 00:45:41.191579    2226 log.go:172] (0xc000a640a0) (3) Data frame sent\nI0226 00:45:41.191653    2226 log.go:172] (0xc000a2adc0) Data frame received for 5\nI0226 00:45:41.191687    2226 log.go:172] (0xc000a183c0) (5) Data frame handling\nI0226 00:45:41.191705    2226 log.go:172] (0xc000a183c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0226 00:45:41.283745    2226 log.go:172] (0xc000a2adc0) Data frame received for 1\nI0226 00:45:41.284044    2226 log.go:172] (0xc000a18320) (1) Data frame handling\nI0226 00:45:41.284130    2226 log.go:172] (0xc000a18320) (1) Data frame sent\nI0226 00:45:41.284453    2226 log.go:172] (0xc000a2adc0) (0xc000a640a0) Stream removed, broadcasting: 3\nI0226 00:45:41.284642    2226 log.go:172] (0xc000a2adc0) (0xc000a183c0) Stream removed, broadcasting: 5\nI0226 00:45:41.285165    2226 log.go:172] (0xc000a2adc0) (0xc000a18320) Stream removed, broadcasting: 1\nI0226 00:45:41.285203    2226 log.go:172] (0xc000a2adc0) Go away received\nI0226 00:45:41.286587    2226 log.go:172] (0xc000a2adc0) (0xc000a18320) Stream removed, broadcasting: 1\nI0226 00:45:41.286625    2226 log.go:172] (0xc000a2adc0) (0xc000a640a0) Stream removed, broadcasting: 3\nI0226 00:45:41.286635    2226 log.go:172] (0xc000a2adc0) (0xc000a183c0) Stream removed, broadcasting: 5\n"
Feb 26 00:45:41.298: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 26 00:45:41.298: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 26 00:45:41.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 26 00:45:41.661: INFO: stderr: "I0226 00:45:41.471226    2246 log.go:172] (0xc000bc14a0) (0xc000c205a0) Create stream\nI0226 00:45:41.471320    2246 log.go:172] (0xc000bc14a0) (0xc000c205a0) Stream added, broadcasting: 1\nI0226 00:45:41.474678    2246 log.go:172] (0xc000bc14a0) Reply frame received for 1\nI0226 00:45:41.474782    2246 log.go:172] (0xc000bc14a0) (0xc000a1a000) Create stream\nI0226 00:45:41.474796    2246 log.go:172] (0xc000bc14a0) (0xc000a1a000) Stream added, broadcasting: 3\nI0226 00:45:41.475812    2246 log.go:172] (0xc000bc14a0) Reply frame received for 3\nI0226 00:45:41.475830    2246 log.go:172] (0xc000bc14a0) (0xc000c20640) Create stream\nI0226 00:45:41.475835    2246 log.go:172] (0xc000bc14a0) (0xc000c20640) Stream added, broadcasting: 5\nI0226 00:45:41.477800    2246 log.go:172] (0xc000bc14a0) Reply frame received for 5\nI0226 00:45:41.536391    2246 log.go:172] (0xc000bc14a0) Data frame received for 5\nI0226 00:45:41.536437    2246 log.go:172] (0xc000c20640) (5) Data frame handling\nI0226 00:45:41.536462    2246 log.go:172] (0xc000c20640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0226 00:45:41.539231    2246 log.go:172] (0xc000bc14a0) Data frame received for 5\nI0226 00:45:41.539321    2246 log.go:172] (0xc000c20640) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0226 00:45:41.539351    2246 log.go:172] (0xc000bc14a0) Data frame received for 3\nI0226 00:45:41.539384    2246 log.go:172] (0xc000a1a000) (3) Data frame handling\nI0226 00:45:41.539401    2246 log.go:172] (0xc000a1a000) (3) Data frame sent\nI0226 00:45:41.539480    2246 log.go:172] (0xc000c20640) (5) Data frame sent\nI0226 00:45:41.539495    2246 log.go:172] (0xc000bc14a0) Data frame received for 5\nI0226 00:45:41.539504    2246 log.go:172] (0xc000c20640) (5) Data frame handling\nI0226 00:45:41.539525    2246 log.go:172] (0xc000c20640) (5) Data frame sent\n+ true\nI0226 00:45:41.646394    2246 log.go:172] (0xc000bc14a0) Data frame received for 1\nI0226 00:45:41.646659    2246 log.go:172] (0xc000c205a0) (1) Data frame handling\nI0226 00:45:41.646764    2246 log.go:172] (0xc000c205a0) (1) Data frame sent\nI0226 00:45:41.646930    2246 log.go:172] (0xc000bc14a0) (0xc000c205a0) Stream removed, broadcasting: 1\nI0226 00:45:41.647824    2246 log.go:172] (0xc000bc14a0) (0xc000a1a000) Stream removed, broadcasting: 3\nI0226 00:45:41.647971    2246 log.go:172] (0xc000bc14a0) (0xc000c20640) Stream removed, broadcasting: 5\nI0226 00:45:41.648033    2246 log.go:172] (0xc000bc14a0) (0xc000c205a0) Stream removed, broadcasting: 1\nI0226 00:45:41.648080    2246 log.go:172] (0xc000bc14a0) (0xc000a1a000) Stream removed, broadcasting: 3\nI0226 00:45:41.648109    2246 log.go:172] (0xc000bc14a0) (0xc000c20640) Stream removed, broadcasting: 5\n"
Feb 26 00:45:41.661: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 26 00:45:41.661: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 26 00:45:41.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 26 00:45:41.996: INFO: stderr: "I0226 00:45:41.844879    2266 log.go:172] (0xc0005f69a0) (0xc0009a8140) Create stream\nI0226 00:45:41.845024    2266 log.go:172] (0xc0005f69a0) (0xc0009a8140) Stream added, broadcasting: 1\nI0226 00:45:41.847788    2266 log.go:172] (0xc0005f69a0) Reply frame received for 1\nI0226 00:45:41.847840    2266 log.go:172] (0xc0005f69a0) (0xc0006a3c20) Create stream\nI0226 00:45:41.847850    2266 log.go:172] (0xc0005f69a0) (0xc0006a3c20) Stream added, broadcasting: 3\nI0226 00:45:41.848831    2266 log.go:172] (0xc0005f69a0) Reply frame received for 3\nI0226 00:45:41.848854    2266 log.go:172] (0xc0005f69a0) (0xc0006a3e00) Create stream\nI0226 00:45:41.848868    2266 log.go:172] (0xc0005f69a0) (0xc0006a3e00) Stream added, broadcasting: 5\nI0226 00:45:41.850242    2266 log.go:172] (0xc0005f69a0) Reply frame received for 5\nI0226 00:45:41.908148    2266 log.go:172] (0xc0005f69a0) Data frame received for 3\nI0226 00:45:41.908301    2266 log.go:172] (0xc0006a3c20) (3) Data frame handling\nI0226 00:45:41.908320    2266 log.go:172] (0xc0006a3c20) (3) Data frame sent\nI0226 00:45:41.908346    2266 log.go:172] (0xc0005f69a0) Data frame received for 5\nI0226 00:45:41.908362    2266 log.go:172] (0xc0006a3e00) (5) Data frame handling\nI0226 00:45:41.908375    2266 log.go:172] (0xc0006a3e00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0226 00:45:41.988259    2266 log.go:172] (0xc0005f69a0) Data frame received for 1\nI0226 00:45:41.988347    2266 log.go:172] (0xc0005f69a0) (0xc0006a3c20) Stream removed, broadcasting: 3\nI0226 00:45:41.988409    2266 log.go:172] (0xc0009a8140) (1) Data frame handling\nI0226 00:45:41.988449    2266 log.go:172] (0xc0009a8140) (1) Data frame sent\nI0226 00:45:41.988481    2266 log.go:172] (0xc0005f69a0) (0xc0006a3e00) Stream removed, broadcasting: 5\nI0226 00:45:41.988521    2266 log.go:172] (0xc0005f69a0) (0xc0009a8140) Stream removed, broadcasting: 1\nI0226 00:45:41.988549    2266 log.go:172] (0xc0005f69a0) Go away received\nI0226 00:45:41.989066    2266 log.go:172] (0xc0005f69a0) (0xc0009a8140) Stream removed, broadcasting: 1\nI0226 00:45:41.989080    2266 log.go:172] (0xc0005f69a0) (0xc0006a3c20) Stream removed, broadcasting: 3\nI0226 00:45:41.989086    2266 log.go:172] (0xc0005f69a0) (0xc0006a3e00) Stream removed, broadcasting: 5\n"
Feb 26 00:45:41.997: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 26 00:45:41.997: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 26 00:45:42.003: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 00:45:42.003: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Feb 26 00:45:52.012: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 00:45:52.012: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 00:45:52.012: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 26 00:45:52.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 26 00:45:52.443: INFO: stderr: "I0226 00:45:52.256013    2287 log.go:172] (0xc0003c6f20) (0xc000892140) Create stream\nI0226 00:45:52.256164    2287 log.go:172] (0xc0003c6f20) (0xc000892140) Stream added, broadcasting: 1\nI0226 00:45:52.258745    2287 log.go:172] (0xc0003c6f20) Reply frame received for 1\nI0226 00:45:52.258798    2287 log.go:172] (0xc0003c6f20) (0xc000718000) Create stream\nI0226 00:45:52.258808    2287 log.go:172] (0xc0003c6f20) (0xc000718000) Stream added, broadcasting: 3\nI0226 00:45:52.263214    2287 log.go:172] (0xc0003c6f20) Reply frame received for 3\nI0226 00:45:52.263228    2287 log.go:172] (0xc0003c6f20) (0xc000718140) Create stream\nI0226 00:45:52.263234    2287 log.go:172] (0xc0003c6f20) (0xc000718140) Stream added, broadcasting: 5\nI0226 00:45:52.264457    2287 log.go:172] (0xc0003c6f20) Reply frame received for 5\nI0226 00:45:52.366177    2287 log.go:172] (0xc0003c6f20) Data frame received for 3\nI0226 00:45:52.366302    2287 log.go:172] (0xc000718000) (3) Data frame handling\nI0226 00:45:52.366343    2287 log.go:172] (0xc000718000) (3) Data frame sent\nI0226 00:45:52.366432    2287 log.go:172] (0xc0003c6f20) Data frame received for 5\nI0226 00:45:52.366453    2287 log.go:172] (0xc000718140) (5) Data frame handling\nI0226 00:45:52.366483    2287 log.go:172] (0xc000718140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0226 00:45:52.429456    2287 log.go:172] (0xc0003c6f20) Data frame received for 1\nI0226 00:45:52.429637    2287 log.go:172] (0xc0003c6f20) (0xc000718140) Stream removed, broadcasting: 5\nI0226 00:45:52.429727    2287 log.go:172] (0xc000892140) (1) Data frame handling\nI0226 00:45:52.429769    2287 log.go:172] (0xc000892140) (1) Data frame sent\nI0226 00:45:52.429982    2287 log.go:172] (0xc0003c6f20) (0xc000718000) Stream removed, broadcasting: 3\nI0226 00:45:52.430044    2287 log.go:172] (0xc0003c6f20) (0xc000892140) Stream removed, broadcasting: 1\nI0226 00:45:52.430085    2287 log.go:172] (0xc0003c6f20) Go away received\nI0226 00:45:52.432109    2287 log.go:172] (0xc0003c6f20) (0xc000892140) Stream removed, broadcasting: 1\nI0226 00:45:52.432143    2287 log.go:172] (0xc0003c6f20) (0xc000718000) Stream removed, broadcasting: 3\nI0226 00:45:52.432155    2287 log.go:172] (0xc0003c6f20) (0xc000718140) Stream removed, broadcasting: 5\n"
Feb 26 00:45:52.444: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 26 00:45:52.444: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 26 00:45:52.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 26 00:45:53.046: INFO: stderr: "I0226 00:45:52.712373    2308 log.go:172] (0xc0000202c0) (0xc00041b540) Create stream\nI0226 00:45:52.712609    2308 log.go:172] (0xc0000202c0) (0xc00041b540) Stream added, broadcasting: 1\nI0226 00:45:52.715979    2308 log.go:172] (0xc0000202c0) Reply frame received for 1\nI0226 00:45:52.716034    2308 log.go:172] (0xc0000202c0) (0xc0009ae000) Create stream\nI0226 00:45:52.716045    2308 log.go:172] (0xc0000202c0) (0xc0009ae000) Stream added, broadcasting: 3\nI0226 00:45:52.717510    2308 log.go:172] (0xc0000202c0) Reply frame received for 3\nI0226 00:45:52.717553    2308 log.go:172] (0xc0000202c0) (0xc000627c20) Create stream\nI0226 00:45:52.717571    2308 log.go:172] (0xc0000202c0) (0xc000627c20) Stream added, broadcasting: 5\nI0226 00:45:52.718825    2308 log.go:172] (0xc0000202c0) Reply frame received for 5\nI0226 00:45:52.826788    2308 log.go:172] (0xc0000202c0) Data frame received for 5\nI0226 00:45:52.826843    2308 log.go:172] (0xc000627c20) (5) Data frame handling\nI0226 00:45:52.826863    2308 log.go:172] (0xc000627c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0226 00:45:52.931964    2308 log.go:172] (0xc0000202c0) Data frame received for 3\nI0226 00:45:52.932545    2308 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0226 00:45:52.932598    2308 log.go:172] (0xc0009ae000) (3) Data frame sent\nI0226 00:45:53.036597    2308 log.go:172] (0xc0000202c0) Data frame received for 1\nI0226 00:45:53.036731    2308 log.go:172] (0xc0000202c0) (0xc000627c20) Stream removed, broadcasting: 5\nI0226 00:45:53.036867    2308 log.go:172] (0xc00041b540) (1) Data frame handling\nI0226 00:45:53.036915    2308 log.go:172] (0xc00041b540) (1) Data frame sent\nI0226 00:45:53.036949    2308 log.go:172] (0xc0000202c0) (0xc0009ae000) Stream removed, broadcasting: 3\nI0226 00:45:53.036981    2308 log.go:172] (0xc0000202c0) (0xc00041b540) Stream removed, broadcasting: 1\nI0226 00:45:53.036997    2308 log.go:172] (0xc0000202c0) Go away received\nI0226 00:45:53.037913    2308 log.go:172] (0xc0000202c0) (0xc00041b540) Stream removed, broadcasting: 1\nI0226 00:45:53.037955    2308 log.go:172] (0xc0000202c0) (0xc0009ae000) Stream removed, broadcasting: 3\nI0226 00:45:53.037975    2308 log.go:172] (0xc0000202c0) (0xc000627c20) Stream removed, broadcasting: 5\n"
Feb 26 00:45:53.047: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 26 00:45:53.047: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 26 00:45:53.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 26 00:45:53.404: INFO: stderr: "I0226 00:45:53.173231    2331 log.go:172] (0xc0009fd080) (0xc000ba61e0) Create stream\nI0226 00:45:53.173318    2331 log.go:172] (0xc0009fd080) (0xc000ba61e0) Stream added, broadcasting: 1\nI0226 00:45:53.176136    2331 log.go:172] (0xc0009fd080) Reply frame received for 1\nI0226 00:45:53.176199    2331 log.go:172] (0xc0009fd080) (0xc0009e6280) Create stream\nI0226 00:45:53.176208    2331 log.go:172] (0xc0009fd080) (0xc0009e6280) Stream added, broadcasting: 3\nI0226 00:45:53.177151    2331 log.go:172] (0xc0009fd080) Reply frame received for 3\nI0226 00:45:53.177218    2331 log.go:172] (0xc0009fd080) (0xc000ba6280) Create stream\nI0226 00:45:53.177229    2331 log.go:172] (0xc0009fd080) (0xc000ba6280) Stream added, broadcasting: 5\nI0226 00:45:53.178194    2331 log.go:172] (0xc0009fd080) Reply frame received for 5\nI0226 00:45:53.246502    2331 log.go:172] (0xc0009fd080) Data frame received for 5\nI0226 00:45:53.246691    2331 log.go:172] (0xc000ba6280) (5) Data frame handling\nI0226 00:45:53.246732    2331 log.go:172] (0xc000ba6280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0226 00:45:53.292731    2331 log.go:172] (0xc0009fd080) Data frame received for 3\nI0226 00:45:53.292849    2331 log.go:172] (0xc0009e6280) (3) Data frame handling\nI0226 00:45:53.292904    2331 log.go:172] (0xc0009e6280) (3) Data frame sent\nI0226 00:45:53.392120    2331 log.go:172] (0xc0009fd080) Data frame received for 1\nI0226 00:45:53.392526    2331 log.go:172] (0xc0009fd080) (0xc0009e6280) Stream removed, broadcasting: 3\nI0226 00:45:53.392631    2331 log.go:172] (0xc000ba61e0) (1) Data frame handling\nI0226 00:45:53.392722    2331 log.go:172] (0xc000ba61e0) (1) Data frame sent\nI0226 00:45:53.392904    2331 log.go:172] (0xc0009fd080) (0xc000ba6280) Stream removed, broadcasting: 5\nI0226 00:45:53.393055    2331 log.go:172] (0xc0009fd080) (0xc000ba61e0) Stream removed, broadcasting: 1\nI0226 00:45:53.393107    2331 log.go:172] (0xc0009fd080) Go away received\nI0226 00:45:53.394797    2331 log.go:172] (0xc0009fd080) (0xc000ba61e0) Stream removed, broadcasting: 1\nI0226 00:45:53.394812    2331 log.go:172] (0xc0009fd080) (0xc0009e6280) Stream removed, broadcasting: 3\nI0226 00:45:53.394823    2331 log.go:172] (0xc0009fd080) (0xc000ba6280) Stream removed, broadcasting: 5\n"
Feb 26 00:45:53.404: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 26 00:45:53.404: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 26 00:45:53.404: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 00:45:53.409: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 26 00:46:03.420: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 26 00:46:03.420: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 26 00:46:03.420: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
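Note: with all three pods deliberately unready, the scale-down proceeds anyway; under Parallel pod management the controller deletes the pods together rather than in reverse ordinal order, which is why the listings below show every pod with a 30s GRACE at once. One way to watch the teardown (sketch):

  kubectl get pods -n statefulset-1066 -w
  # ss-0, ss-1 and ss-2 all enter Terminating together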
Feb 26 00:46:03.444: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 26 00:46:03.444: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:44:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:44:58 +0000 UTC  }]
Feb 26 00:46:03.444: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  }]
Feb 26 00:46:03.444: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  }]
Feb 26 00:46:03.444: INFO: 
Feb 26 00:46:03.444: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 26 00:46:05.000: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 26 00:46:05.000: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:44:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:44:58 +0000 UTC  }]
Feb 26 00:46:05.000: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  }]
Feb 26 00:46:05.000: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  }]
Feb 26 00:46:05.000: INFO: 
Feb 26 00:46:05.000: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 26 00:46:06 - 00:46:08: INFO: (three identical status dumps elided: ss-0, ss-1 and ss-2 all Running with GRACE 30s and Ready=False)
Feb 26 00:46:08.027: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 26 00:46:09.033: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 26 00:46:09.033: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:44:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:44:58 +0000 UTC  }]
Feb 26 00:46:09.034: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  }]
Feb 26 00:46:09.034: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  }]
Feb 26 00:46:09.034: INFO: 
Feb 26 00:46:09.034: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 26 00:46:10.086: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 26 00:46:10.086: INFO: ss-0  jerma-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:44:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:44:58 +0000 UTC  }]
Feb 26 00:46:10.086: INFO: ss-2  jerma-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  }]
Feb 26 00:46:10.086: INFO: 
Feb 26 00:46:10.086: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 26 00:46:11.092: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 26 00:46:11.092: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:44:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:44:58 +0000 UTC  }]
Feb 26 00:46:11.093: INFO: ss-2  jerma-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  }]
Feb 26 00:46:11.093: INFO: 
Feb 26 00:46:11.093: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 26 00:46:12.100: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 26 00:46:12.100: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  }]
Feb 26 00:46:12.100: INFO: 
Feb 26 00:46:12.100: INFO: StatefulSet ss has not reached scale 0, at 1
Feb 26 00:46:13.107: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 26 00:46:13.107: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 00:45:30 +0000 UTC  }]
Feb 26 00:46:13.107: INFO: 
Feb 26 00:46:13.107: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-1066
Feb 26 00:46:14.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 26 00:46:14.279: INFO: rc: 1
Feb 26 00:46:14.279: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Feb 26 00:46:24.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 26 00:46:24.384: INFO: rc: 1
Feb 26 00:46:24.384: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Feb 26 00:46:34 - 00:51:08: INFO: (28 identical retry cycles elided) Every 10s the framework re-ran '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'; each attempt returned rc: 1 with empty stdout and stderr: Error from server (NotFound): pods "ss-2" not found
Feb 26 00:51:18.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1066 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 26 00:51:19.085: INFO: rc: 1
Feb 26 00:51:19.085: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Feb 26 00:51:19.085: INFO: Scaling statefulset ss to 0
Feb 26 00:51:19.096: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 26 00:51:19.098: INFO: Deleting all statefulset in ns statefulset-1066
Feb 26 00:51:19.100: INFO: Scaling statefulset ss to 0
Feb 26 00:51:19.105: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 00:51:19.107: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:51:19.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1066" for this suite.

• [SLOW TEST:381.459 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":280,"completed":154,"skipped":2412,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:51:19.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Feb 26 00:51:30.081: INFO: Successfully updated pod "adopt-release-6cvzp"
STEP: Checking that the Job readopts the Pod
Feb 26 00:51:30.081: INFO: Waiting up to 15m0s for pod "adopt-release-6cvzp" in namespace "job-2535" to be "adopted"
Feb 26 00:51:30.094: INFO: Pod "adopt-release-6cvzp": Phase="Running", Reason="", readiness=true. Elapsed: 12.925536ms
Feb 26 00:51:32.100: INFO: Pod "adopt-release-6cvzp": Phase="Running", Reason="", readiness=true. Elapsed: 2.01886358s
Feb 26 00:51:32.100: INFO: Pod "adopt-release-6cvzp" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Feb 26 00:51:32.623: INFO: Successfully updated pod "adopt-release-6cvzp"
STEP: Checking that the Job releases the Pod
Feb 26 00:51:32.623: INFO: Waiting up to 15m0s for pod "adopt-release-6cvzp" in namespace "job-2535" to be "released"
Feb 26 00:51:32.634: INFO: Pod "adopt-release-6cvzp": Phase="Running", Reason="", readiness=true. Elapsed: 10.975607ms
Feb 26 00:51:34.641: INFO: Pod "adopt-release-6cvzp": Phase="Running", Reason="", readiness=true. Elapsed: 2.017603131s
Feb 26 00:51:34.641: INFO: Pod "adopt-release-6cvzp" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:51:34.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2535" for this suite.

• [SLOW TEST:15.359 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":280,"completed":155,"skipped":2437,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:51:34.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:51:41.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4979" for this suite.

• [SLOW TEST:7.277 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":280,"completed":156,"skipped":2448,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:51:41.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 26 00:51:50.136: INFO: &Pod{ObjectMeta:{send-events-6e84af95-62a6-41cf-bbe5-dda1a78e92cf  events-5137 /api/v1/namespaces/events-5137/pods/send-events-6e84af95-62a6-41cf-bbe5-dda1a78e92cf 4b9f84e6-6730-41f7-964e-240e848729c9 10771296 0 2020-02-26 00:51:42 +0000 UTC   map[name:foo time:61571862] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rwns2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rwns2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rwns2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 00:51:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 00:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 00:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 00:51:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-02-26 00:51:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-26 00:51:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://89a9b96c6d5637e4acf7eb4dc593ce3efc946155b4458a456310bf177ddc71cb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Feb 26 00:51:52.146: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 26 00:51:54.152: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:51:54.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5137" for this suite.

• [SLOW TEST:12.309 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":280,"completed":157,"skipped":2473,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:51:54.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-projected-29vv
STEP: Creating a pod to test atomic-volume-subpath
Feb 26 00:51:54.433: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-29vv" in namespace "subpath-4877" to be "success or failure"
Feb 26 00:51:54.457: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Pending", Reason="", readiness=false. Elapsed: 23.890945ms
Feb 26 00:51:56.870: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.43687362s
Feb 26 00:51:58.882: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.44840621s
Feb 26 00:52:01.298: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.864827443s
Feb 26 00:52:03.316: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Running", Reason="", readiness=true. Elapsed: 8.88286806s
Feb 26 00:52:05.328: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Running", Reason="", readiness=true. Elapsed: 10.894756986s
Feb 26 00:52:07.336: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Running", Reason="", readiness=true. Elapsed: 12.903093376s
Feb 26 00:52:09.349: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Running", Reason="", readiness=true. Elapsed: 14.915604868s
Feb 26 00:52:11.358: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Running", Reason="", readiness=true. Elapsed: 16.924806688s
Feb 26 00:52:13.374: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Running", Reason="", readiness=true. Elapsed: 18.940626312s
Feb 26 00:52:15.384: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Running", Reason="", readiness=true. Elapsed: 20.950304701s
Feb 26 00:52:17.394: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Running", Reason="", readiness=true. Elapsed: 22.960930275s
Feb 26 00:52:19.401: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Running", Reason="", readiness=true. Elapsed: 24.967237048s
Feb 26 00:52:21.409: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Running", Reason="", readiness=true. Elapsed: 26.975824044s
Feb 26 00:52:23.444: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Running", Reason="", readiness=true. Elapsed: 29.010175495s
Feb 26 00:52:25.573: INFO: Pod "pod-subpath-test-projected-29vv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.139635526s
STEP: Saw pod success
Feb 26 00:52:25.573: INFO: Pod "pod-subpath-test-projected-29vv" satisfied condition "success or failure"
Feb 26 00:52:25.596: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-subpath-test-projected-29vv container test-container-subpath-projected-29vv: 
STEP: delete the pod
Feb 26 00:52:25.844: INFO: Waiting for pod pod-subpath-test-projected-29vv to disappear
Feb 26 00:52:25.851: INFO: Pod pod-subpath-test-projected-29vv no longer exists
STEP: Deleting pod pod-subpath-test-projected-29vv
Feb 26 00:52:25.851: INFO: Deleting pod "pod-subpath-test-projected-29vv" in namespace "subpath-4877"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:52:25.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4877" for this suite.

• [SLOW TEST:31.640 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":280,"completed":158,"skipped":2481,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:52:25.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:52:34.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8633" for this suite.

• [SLOW TEST:8.433 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":159,"skipped":2481,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:52:34.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:53:27.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8380" for this suite.

• [SLOW TEST:53.223 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":280,"completed":160,"skipped":2497,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:53:27.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Feb 26 00:53:27.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3867'
Feb 26 00:53:28.032: INFO: stderr: ""
Feb 26 00:53:28.032: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 26 00:53:28.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3867'
Feb 26 00:53:28.400: INFO: stderr: ""
Feb 26 00:53:28.400: INFO: stdout: "update-demo-nautilus-4dh4r update-demo-nautilus-l42qb "
Feb 26 00:53:28.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dh4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3867'
Feb 26 00:53:28.645: INFO: stderr: ""
Feb 26 00:53:28.645: INFO: stdout: ""
Feb 26 00:53:28.645: INFO: update-demo-nautilus-4dh4r is created but not running
Feb 26 00:53:33.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3867'
Feb 26 00:53:33.799: INFO: stderr: ""
Feb 26 00:53:33.800: INFO: stdout: "update-demo-nautilus-4dh4r update-demo-nautilus-l42qb "
Feb 26 00:53:33.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dh4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3867'
Feb 26 00:53:34.040: INFO: stderr: ""
Feb 26 00:53:34.040: INFO: stdout: ""
Feb 26 00:53:34.040: INFO: update-demo-nautilus-4dh4r is created but not running
Feb 26 00:53:39.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3867'
Feb 26 00:53:39.258: INFO: stderr: ""
Feb 26 00:53:39.259: INFO: stdout: "update-demo-nautilus-4dh4r update-demo-nautilus-l42qb "
Feb 26 00:53:39.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dh4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3867'
Feb 26 00:53:39.406: INFO: stderr: ""
Feb 26 00:53:39.407: INFO: stdout: "true"
Feb 26 00:53:39.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dh4r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3867'
Feb 26 00:53:39.485: INFO: stderr: ""
Feb 26 00:53:39.485: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 00:53:39.485: INFO: validating pod update-demo-nautilus-4dh4r
Feb 26 00:53:39.495: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 00:53:39.495: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 26 00:53:39.495: INFO: update-demo-nautilus-4dh4r is verified up and running
Feb 26 00:53:39.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l42qb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3867'
Feb 26 00:53:39.631: INFO: stderr: ""
Feb 26 00:53:39.631: INFO: stdout: ""
Feb 26 00:53:39.631: INFO: update-demo-nautilus-l42qb is created but not running
Feb 26 00:53:44.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3867'
Feb 26 00:53:44.765: INFO: stderr: ""
Feb 26 00:53:44.765: INFO: stdout: "update-demo-nautilus-4dh4r update-demo-nautilus-l42qb "
Feb 26 00:53:44.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dh4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3867'
Feb 26 00:53:44.923: INFO: stderr: ""
Feb 26 00:53:44.923: INFO: stdout: "true"
Feb 26 00:53:44.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dh4r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3867'
Feb 26 00:53:45.032: INFO: stderr: ""
Feb 26 00:53:45.032: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 00:53:45.032: INFO: validating pod update-demo-nautilus-4dh4r
Feb 26 00:53:45.039: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 00:53:45.039: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 26 00:53:45.039: INFO: update-demo-nautilus-4dh4r is verified up and running
Feb 26 00:53:45.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l42qb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3867'
Feb 26 00:53:45.160: INFO: stderr: ""
Feb 26 00:53:45.161: INFO: stdout: "true"
Feb 26 00:53:45.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l42qb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3867'
Feb 26 00:53:45.289: INFO: stderr: ""
Feb 26 00:53:45.290: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 00:53:45.290: INFO: validating pod update-demo-nautilus-l42qb
Feb 26 00:53:45.310: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 00:53:45.310: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 26 00:53:45.310: INFO: update-demo-nautilus-l42qb is verified up and running
STEP: using delete to clean up resources
Feb 26 00:53:45.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3867'
Feb 26 00:53:45.448: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 00:53:45.448: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 26 00:53:45.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3867'
Feb 26 00:53:45.572: INFO: stderr: "No resources found in kubectl-3867 namespace.\n"
Feb 26 00:53:45.572: INFO: stdout: ""
Feb 26 00:53:45.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3867 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 26 00:53:45.651: INFO: stderr: ""
Feb 26 00:53:45.651: INFO: stdout: "update-demo-nautilus-4dh4r\nupdate-demo-nautilus-l42qb\n"
Feb 26 00:53:46.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3867'
Feb 26 00:53:46.255: INFO: stderr: "No resources found in kubectl-3867 namespace.\n"
Feb 26 00:53:46.255: INFO: stdout: ""
Feb 26 00:53:46.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3867 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 26 00:53:46.371: INFO: stderr: ""
Feb 26 00:53:46.371: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:53:46.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3867" for this suite.

• [SLOW TEST:21.988 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":280,"completed":161,"skipped":2510,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:53:49.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 26 00:53:49.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9366'
Feb 26 00:53:49.959: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 26 00:53:49.960: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Feb 26 00:53:50.309: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-rf5nd]
Feb 26 00:53:50.310: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-rf5nd" in namespace "kubectl-9366" to be "running and ready"
Feb 26 00:53:50.338: INFO: Pod "e2e-test-httpd-rc-rf5nd": Phase="Pending", Reason="", readiness=false. Elapsed: 27.748185ms
Feb 26 00:53:52.532: INFO: Pod "e2e-test-httpd-rc-rf5nd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221773698s
Feb 26 00:53:54.563: INFO: Pod "e2e-test-httpd-rc-rf5nd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253200135s
Feb 26 00:53:56.579: INFO: Pod "e2e-test-httpd-rc-rf5nd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.269129992s
Feb 26 00:53:58.592: INFO: Pod "e2e-test-httpd-rc-rf5nd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.282373925s
Feb 26 00:54:00.636: INFO: Pod "e2e-test-httpd-rc-rf5nd": Phase="Running", Reason="", readiness=true. Elapsed: 10.325887017s
Feb 26 00:54:00.636: INFO: Pod "e2e-test-httpd-rc-rf5nd" satisfied condition "running and ready"
Feb 26 00:54:00.636: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-rf5nd]
Feb 26 00:54:00.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-9366'
Feb 26 00:54:00.920: INFO: stderr: ""
Feb 26 00:54:00.921: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Wed Feb 26 00:53:58.001230 2020] [mpm_event:notice] [pid 1:tid 140638927993704] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Feb 26 00:53:58.001290 2020] [core:notice] [pid 1:tid 140638927993704] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639
Feb 26 00:54:00.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9366'
Feb 26 00:54:01.102: INFO: stderr: ""
Feb 26 00:54:01.102: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:54:01.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9366" for this suite.

• [SLOW TEST:11.607 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1630
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":280,"completed":162,"skipped":2512,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:54:01.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:54:12.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7511" for this suite.

• [SLOW TEST:11.122 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":280,"completed":163,"skipped":2514,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:54:12.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Feb 26 00:54:12.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7909'
Feb 26 00:54:12.652: INFO: stderr: ""
Feb 26 00:54:12.652: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 26 00:54:13.662: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:54:13.663: INFO: Found 0 / 1
Feb 26 00:54:14.663: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:54:14.663: INFO: Found 0 / 1
Feb 26 00:54:15.660: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:54:15.661: INFO: Found 0 / 1
Feb 26 00:54:16.657: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:54:16.657: INFO: Found 0 / 1
Feb 26 00:54:17.658: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:54:17.658: INFO: Found 0 / 1
Feb 26 00:54:18.658: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:54:18.658: INFO: Found 0 / 1
Feb 26 00:54:19.659: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:54:19.660: INFO: Found 0 / 1
Feb 26 00:54:20.659: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:54:20.659: INFO: Found 0 / 1
Feb 26 00:54:21.660: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:54:21.660: INFO: Found 0 / 1
Feb 26 00:54:22.672: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:54:22.673: INFO: Found 1 / 1
Feb 26 00:54:22.673: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 26 00:54:22.681: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:54:22.681: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 26 00:54:22.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-mrdc5 --namespace=kubectl-7909 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 26 00:54:22.797: INFO: stderr: ""
Feb 26 00:54:22.797: INFO: stdout: "pod/agnhost-master-mrdc5 patched\n"
STEP: checking annotations
Feb 26 00:54:22.857: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 26 00:54:22.857: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:54:22.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7909" for this suite.

• [SLOW TEST:10.652 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":280,"completed":164,"skipped":2524,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:54:22.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 26 00:54:23.557: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 26 00:54:25.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275263, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275263, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275263, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275263, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 00:54:27.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275263, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275263, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275263, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275263, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 00:54:29.588: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275263, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275263, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275263, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275263, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 26 00:54:32.645: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 00:54:32.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9905-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:54:33.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5634" for this suite.
STEP: Destroying namespace "webhook-5634-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.174 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":280,"completed":165,"skipped":2528,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:54:34.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 26 00:54:34.192: INFO: Waiting up to 5m0s for pod "pod-b259e119-99df-4d2e-b234-e16f9af99ab3" in namespace "emptydir-4525" to be "success or failure"
Feb 26 00:54:34.205: INFO: Pod "pod-b259e119-99df-4d2e-b234-e16f9af99ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.666924ms
Feb 26 00:54:36.217: INFO: Pod "pod-b259e119-99df-4d2e-b234-e16f9af99ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02383431s
Feb 26 00:54:38.222: INFO: Pod "pod-b259e119-99df-4d2e-b234-e16f9af99ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029443708s
Feb 26 00:54:40.228: INFO: Pod "pod-b259e119-99df-4d2e-b234-e16f9af99ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034959595s
Feb 26 00:54:42.236: INFO: Pod "pod-b259e119-99df-4d2e-b234-e16f9af99ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04382419s
Feb 26 00:54:44.266: INFO: Pod "pod-b259e119-99df-4d2e-b234-e16f9af99ab3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072863036s
STEP: Saw pod success
Feb 26 00:54:44.266: INFO: Pod "pod-b259e119-99df-4d2e-b234-e16f9af99ab3" satisfied condition "success or failure"
Feb 26 00:54:44.269: INFO: Trying to get logs from node jerma-node pod pod-b259e119-99df-4d2e-b234-e16f9af99ab3 container test-container: 
STEP: delete the pod
Feb 26 00:54:44.331: INFO: Waiting for pod pod-b259e119-99df-4d2e-b234-e16f9af99ab3 to disappear
Feb 26 00:54:44.337: INFO: Pod pod-b259e119-99df-4d2e-b234-e16f9af99ab3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:54:44.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4525" for this suite.

• [SLOW TEST:10.263 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":166,"skipped":2529,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:54:44.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:55:44.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9461" for this suite.

• [SLOW TEST:60.176 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":280,"completed":167,"skipped":2531,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:55:44.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 26 00:55:44.655: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 26 00:55:44.667: INFO: Waiting for terminating namespaces to be deleted...
Feb 26 00:55:44.669: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 26 00:55:44.680: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 26 00:55:44.680: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 26 00:55:44.680: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 26 00:55:44.680: INFO: 	Container weave ready: true, restart count 1
Feb 26 00:55:44.680: INFO: 	Container weave-npc ready: true, restart count 0
Feb 26 00:55:44.680: INFO: test-webserver-a09765d3-e2ac-4ceb-8890-42f3ba61ddd7 from container-probe-9461 started at 2020-02-26 00:54:44 +0000 UTC (1 container status recorded)
Feb 26 00:55:44.680: INFO: 	Container test-webserver ready: false, restart count 0
Feb 26 00:55:44.680: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 26 00:55:44.746: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 26 00:55:44.746: INFO: 	Container weave ready: true, restart count 0
Feb 26 00:55:44.746: INFO: 	Container weave-npc ready: true, restart count 0
Feb 26 00:55:44.746: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 26 00:55:44.746: INFO: 	Container kube-controller-manager ready: true, restart count 19
Feb 26 00:55:44.746: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 26 00:55:44.746: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 26 00:55:44.746: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 26 00:55:44.746: INFO: 	Container kube-scheduler ready: true, restart count 25
Feb 26 00:55:44.746: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 26 00:55:44.746: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 26 00:55:44.746: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 26 00:55:44.746: INFO: 	Container etcd ready: true, restart count 1
Feb 26 00:55:44.746: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 26 00:55:44.746: INFO: 	Container coredns ready: true, restart count 0
Feb 26 00:55:44.746: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 26 00:55:44.746: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to find a node that can run it.
STEP: Explicitly deleting the pod to free the resources it holds.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-10d3e88a-04dc-4824-8e2d-f1e030928ea7 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1, expecting it to be scheduled
STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides, expecting it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-10d3e88a-04dc-4824-8e2d-f1e030928ea7 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-10d3e88a-04dc-4824-8e2d-f1e030928ea7
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:56:21.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6834" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:36.537 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":280,"completed":168,"skipped":2540,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:56:21.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-02a548b3-77b2-4d50-908d-a547270ffcea
STEP: Creating a pod to test consume configMaps
Feb 26 00:56:21.195: INFO: Waiting up to 5m0s for pod "pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6" in namespace "configmap-3462" to be "success or failure"
Feb 26 00:56:21.243: INFO: Pod "pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6": Phase="Pending", Reason="", readiness=false. Elapsed: 47.602189ms
Feb 26 00:56:23.251: INFO: Pod "pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055511784s
Feb 26 00:56:25.257: INFO: Pod "pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061707899s
Feb 26 00:56:27.263: INFO: Pod "pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068215469s
Feb 26 00:56:29.271: INFO: Pod "pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075561326s
Feb 26 00:56:31.277: INFO: Pod "pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.081902793s
Feb 26 00:56:33.286: INFO: Pod "pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.090720038s
Feb 26 00:56:35.295: INFO: Pod "pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.099821362s
STEP: Saw pod success
Feb 26 00:56:35.295: INFO: Pod "pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6" satisfied condition "success or failure"
Feb 26 00:56:35.304: INFO: Trying to get logs from node jerma-node pod pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6 container configmap-volume-test: 
STEP: delete the pod
Feb 26 00:56:35.452: INFO: Waiting for pod pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6 to disappear
Feb 26 00:56:35.475: INFO: Pod pod-configmaps-158cb279-7ad4-49be-8f73-2ab6df9a31d6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:56:35.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3462" for this suite.

• [SLOW TEST:14.434 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":169,"skipped":2578,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:56:35.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 00:56:35.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:56:44.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2050" for this suite.

• [SLOW TEST:8.568 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":280,"completed":170,"skipped":2591,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:56:44.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-f25c9c7f-a062-488c-8b59-004af2f78bf8
STEP: Creating a pod to test consume configMaps
Feb 26 00:56:44.232: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f695be2-93b4-4bcd-8bd4-d92a650530d8" in namespace "configmap-6329" to be "success or failure"
Feb 26 00:56:44.255: INFO: Pod "pod-configmaps-2f695be2-93b4-4bcd-8bd4-d92a650530d8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.318538ms
Feb 26 00:56:46.263: INFO: Pod "pod-configmaps-2f695be2-93b4-4bcd-8bd4-d92a650530d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03073645s
Feb 26 00:56:48.271: INFO: Pod "pod-configmaps-2f695be2-93b4-4bcd-8bd4-d92a650530d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039037179s
Feb 26 00:56:50.279: INFO: Pod "pod-configmaps-2f695be2-93b4-4bcd-8bd4-d92a650530d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04708828s
Feb 26 00:56:52.290: INFO: Pod "pod-configmaps-2f695be2-93b4-4bcd-8bd4-d92a650530d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058371547s
STEP: Saw pod success
Feb 26 00:56:52.291: INFO: Pod "pod-configmaps-2f695be2-93b4-4bcd-8bd4-d92a650530d8" satisfied condition "success or failure"
Feb 26 00:56:52.295: INFO: Trying to get logs from node jerma-node pod pod-configmaps-2f695be2-93b4-4bcd-8bd4-d92a650530d8 container configmap-volume-test: 
STEP: delete the pod
Feb 26 00:56:52.346: INFO: Waiting for pod pod-configmaps-2f695be2-93b4-4bcd-8bd4-d92a650530d8 to disappear
Feb 26 00:56:52.351: INFO: Pod pod-configmaps-2f695be2-93b4-4bcd-8bd4-d92a650530d8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:56:52.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6329" for this suite.

• [SLOW TEST:8.288 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":171,"skipped":2594,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:56:52.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with a non-best-effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with non-best-effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a non-best-effort pod
STEP: Ensuring resource quota with non-best-effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:57:08.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9481" for this suite.

• [SLOW TEST:16.516 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":280,"completed":172,"skipped":2608,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:57:08.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 00:57:09.290: INFO: Waiting up to 5m0s for pod "downwardapi-volume-335ae4ad-26b7-40d0-bef4-1250b0de45bd" in namespace "downward-api-9342" to be "success or failure"
Feb 26 00:57:09.322: INFO: Pod "downwardapi-volume-335ae4ad-26b7-40d0-bef4-1250b0de45bd": Phase="Pending", Reason="", readiness=false. Elapsed: 31.954412ms
Feb 26 00:57:11.329: INFO: Pod "downwardapi-volume-335ae4ad-26b7-40d0-bef4-1250b0de45bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038802763s
Feb 26 00:57:13.339: INFO: Pod "downwardapi-volume-335ae4ad-26b7-40d0-bef4-1250b0de45bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048784976s
Feb 26 00:57:15.347: INFO: Pod "downwardapi-volume-335ae4ad-26b7-40d0-bef4-1250b0de45bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056381964s
Feb 26 00:57:17.356: INFO: Pod "downwardapi-volume-335ae4ad-26b7-40d0-bef4-1250b0de45bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065529251s
Feb 26 00:57:19.365: INFO: Pod "downwardapi-volume-335ae4ad-26b7-40d0-bef4-1250b0de45bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074552151s
STEP: Saw pod success
Feb 26 00:57:19.365: INFO: Pod "downwardapi-volume-335ae4ad-26b7-40d0-bef4-1250b0de45bd" satisfied condition "success or failure"
Feb 26 00:57:19.370: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-335ae4ad-26b7-40d0-bef4-1250b0de45bd container client-container: 
STEP: delete the pod
Feb 26 00:57:19.462: INFO: Waiting for pod downwardapi-volume-335ae4ad-26b7-40d0-bef4-1250b0de45bd to disappear
Feb 26 00:57:19.539: INFO: Pod downwardapi-volume-335ae4ad-26b7-40d0-bef4-1250b0de45bd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:57:19.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9342" for this suite.

• [SLOW TEST:10.671 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":173,"skipped":2618,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:57:19.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:57:39.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3874" for this suite.
STEP: Destroying namespace "nsdeletetest-4135" for this suite.
Feb 26 00:57:39.030: INFO: Namespace nsdeletetest-4135 was already deleted
STEP: Destroying namespace "nsdeletetest-4437" for this suite.

• [SLOW TEST:19.476 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":280,"completed":174,"skipped":2657,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:57:39.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 26 00:57:49.246: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2250 PodName:pod-sharedvolume-b2b578e7-73b9-47f1-a928-523597e96cdf ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 00:57:49.247: INFO: >>> kubeConfig: /root/.kube/config
I0226 00:57:49.309242       9 log.go:172] (0xc0045a22c0) (0xc0023fa5a0) Create stream
I0226 00:57:49.309389       9 log.go:172] (0xc0045a22c0) (0xc0023fa5a0) Stream added, broadcasting: 1
I0226 00:57:49.318065       9 log.go:172] (0xc0045a22c0) Reply frame received for 1
I0226 00:57:49.318175       9 log.go:172] (0xc0045a22c0) (0xc0028d3540) Create stream
I0226 00:57:49.318192       9 log.go:172] (0xc0045a22c0) (0xc0028d3540) Stream added, broadcasting: 3
I0226 00:57:49.320627       9 log.go:172] (0xc0045a22c0) Reply frame received for 3
I0226 00:57:49.320685       9 log.go:172] (0xc0045a22c0) (0xc001e3e320) Create stream
I0226 00:57:49.320710       9 log.go:172] (0xc0045a22c0) (0xc001e3e320) Stream added, broadcasting: 5
I0226 00:57:49.322423       9 log.go:172] (0xc0045a22c0) Reply frame received for 5
I0226 00:57:49.395464       9 log.go:172] (0xc0045a22c0) Data frame received for 3
I0226 00:57:49.395654       9 log.go:172] (0xc0028d3540) (3) Data frame handling
I0226 00:57:49.395684       9 log.go:172] (0xc0028d3540) (3) Data frame sent
I0226 00:57:49.463948       9 log.go:172] (0xc0045a22c0) (0xc0028d3540) Stream removed, broadcasting: 3
I0226 00:57:49.464070       9 log.go:172] (0xc0045a22c0) Data frame received for 1
I0226 00:57:49.464104       9 log.go:172] (0xc0023fa5a0) (1) Data frame handling
I0226 00:57:49.464135       9 log.go:172] (0xc0023fa5a0) (1) Data frame sent
I0226 00:57:49.464163       9 log.go:172] (0xc0045a22c0) (0xc001e3e320) Stream removed, broadcasting: 5
I0226 00:57:49.464196       9 log.go:172] (0xc0045a22c0) (0xc0023fa5a0) Stream removed, broadcasting: 1
I0226 00:57:49.464207       9 log.go:172] (0xc0045a22c0) Go away received
I0226 00:57:49.464560       9 log.go:172] (0xc0045a22c0) (0xc0023fa5a0) Stream removed, broadcasting: 1
I0226 00:57:49.464573       9 log.go:172] (0xc0045a22c0) (0xc0028d3540) Stream removed, broadcasting: 3
I0226 00:57:49.464581       9 log.go:172] (0xc0045a22c0) (0xc001e3e320) Stream removed, broadcasting: 5
Feb 26 00:57:49.464: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:57:49.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2250" for this suite.

• [SLOW TEST:10.451 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":280,"completed":175,"skipped":2680,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:57:49.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9567.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9567.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9567.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9567.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9567.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9567.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9567.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9567.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9567.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9567.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 31.59.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.59.31_udp@PTR;check="$$(dig +tcp +noall +answer +search 31.59.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.59.31_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9567.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9567.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9567.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9567.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9567.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9567.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9567.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9567.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9567.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9567.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9567.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 31.59.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.59.31_udp@PTR;check="$$(dig +tcp +noall +answer +search 31.59.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.59.31_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 26 00:58:05.788: INFO: Unable to read wheezy_udp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:05.793: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:05.797: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:05.802: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:05.983: INFO: Unable to read jessie_udp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:05.989: INFO: Unable to read jessie_tcp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:06.002: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:06.007: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:06.041: INFO: Lookups using dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47 failed for: [wheezy_udp@dns-test-service.dns-9567.svc.cluster.local wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local jessie_udp@dns-test-service.dns-9567.svc.cluster.local jessie_tcp@dns-test-service.dns-9567.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local]

Feb 26 00:58:11.055: INFO: Unable to read wheezy_udp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:11.062: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:11.066: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:11.069: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:11.093: INFO: Unable to read jessie_udp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:11.096: INFO: Unable to read jessie_tcp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:11.098: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:11.101: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:11.118: INFO: Lookups using dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47 failed for: [wheezy_udp@dns-test-service.dns-9567.svc.cluster.local wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local jessie_udp@dns-test-service.dns-9567.svc.cluster.local jessie_tcp@dns-test-service.dns-9567.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local]

Feb 26 00:58:16.054: INFO: Unable to read wheezy_udp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:16.061: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:16.071: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:16.076: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:16.100: INFO: Unable to read jessie_udp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:16.111: INFO: Unable to read jessie_tcp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:16.114: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:16.116: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:16.138: INFO: Lookups using dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47 failed for: [wheezy_udp@dns-test-service.dns-9567.svc.cluster.local wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local jessie_udp@dns-test-service.dns-9567.svc.cluster.local jessie_tcp@dns-test-service.dns-9567.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local]

Feb 26 00:58:21.050: INFO: Unable to read wheezy_udp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:21.055: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:21.058: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:21.062: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:21.091: INFO: Unable to read jessie_udp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:21.096: INFO: Unable to read jessie_tcp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:21.100: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:21.102: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:21.131: INFO: Lookups using dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47 failed for: [wheezy_udp@dns-test-service.dns-9567.svc.cluster.local wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local jessie_udp@dns-test-service.dns-9567.svc.cluster.local jessie_tcp@dns-test-service.dns-9567.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local]

Feb 26 00:58:26.060: INFO: Unable to read wheezy_udp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:26.073: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:26.079: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:26.087: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:26.167: INFO: Unable to read jessie_udp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:26.173: INFO: Unable to read jessie_tcp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:26.180: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:26.188: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:26.224: INFO: Lookups using dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47 failed for: [wheezy_udp@dns-test-service.dns-9567.svc.cluster.local wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local jessie_udp@dns-test-service.dns-9567.svc.cluster.local jessie_tcp@dns-test-service.dns-9567.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local]

Feb 26 00:58:31.090: INFO: Unable to read wheezy_udp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:31.133: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:31.137: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:31.141: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:31.168: INFO: Unable to read jessie_udp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:31.171: INFO: Unable to read jessie_tcp@dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:31.174: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:31.178: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local from pod dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47: the server could not find the requested resource (get pods dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47)
Feb 26 00:58:31.196: INFO: Lookups using dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47 failed for: [wheezy_udp@dns-test-service.dns-9567.svc.cluster.local wheezy_tcp@dns-test-service.dns-9567.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local jessie_udp@dns-test-service.dns-9567.svc.cluster.local jessie_tcp@dns-test-service.dns-9567.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9567.svc.cluster.local]

Feb 26 00:58:36.136: INFO: DNS probes using dns-9567/dns-test-4e6e1518-b425-44e6-b199-a8d0be02fa47 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:58:36.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9567" for this suite.

• [SLOW TEST:47.054 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":280,"completed":176,"skipped":2701,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:58:36.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 00:58:36.670: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-722c25cd-3e7b-48f1-813e-bf9d1e59c52e" in namespace "security-context-test-3901" to be "success or failure"
Feb 26 00:58:36.680: INFO: Pod "busybox-privileged-false-722c25cd-3e7b-48f1-813e-bf9d1e59c52e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.232982ms
Feb 26 00:58:38.686: INFO: Pod "busybox-privileged-false-722c25cd-3e7b-48f1-813e-bf9d1e59c52e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016150705s
Feb 26 00:58:40.700: INFO: Pod "busybox-privileged-false-722c25cd-3e7b-48f1-813e-bf9d1e59c52e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030376412s
Feb 26 00:58:42.708: INFO: Pod "busybox-privileged-false-722c25cd-3e7b-48f1-813e-bf9d1e59c52e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038302513s
Feb 26 00:58:44.713: INFO: Pod "busybox-privileged-false-722c25cd-3e7b-48f1-813e-bf9d1e59c52e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043222805s
Feb 26 00:58:46.721: INFO: Pod "busybox-privileged-false-722c25cd-3e7b-48f1-813e-bf9d1e59c52e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050814907s
Feb 26 00:58:46.721: INFO: Pod "busybox-privileged-false-722c25cd-3e7b-48f1-813e-bf9d1e59c52e" satisfied condition "success or failure"
Feb 26 00:58:46.926: INFO: Got logs for pod "busybox-privileged-false-722c25cd-3e7b-48f1-813e-bf9d1e59c52e": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:58:46.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3901" for this suite.

• [SLOW TEST:10.410 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":177,"skipped":2712,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:58:46.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating server pod server in namespace prestop-8468
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8468
STEP: Deleting pre-stop pod
Feb 26 00:59:12.386: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:59:12.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8468" for this suite.

• [SLOW TEST:25.471 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":280,"completed":178,"skipped":2792,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:59:12.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 00:59:12.623: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6439d6a6-b652-4c86-826a-0cb4f07d3787" in namespace "projected-1959" to be "success or failure"
Feb 26 00:59:12.643: INFO: Pod "downwardapi-volume-6439d6a6-b652-4c86-826a-0cb4f07d3787": Phase="Pending", Reason="", readiness=false. Elapsed: 19.142676ms
Feb 26 00:59:14.651: INFO: Pod "downwardapi-volume-6439d6a6-b652-4c86-826a-0cb4f07d3787": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027713092s
Feb 26 00:59:16.658: INFO: Pod "downwardapi-volume-6439d6a6-b652-4c86-826a-0cb4f07d3787": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034284077s
Feb 26 00:59:18.667: INFO: Pod "downwardapi-volume-6439d6a6-b652-4c86-826a-0cb4f07d3787": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043534977s
Feb 26 00:59:20.677: INFO: Pod "downwardapi-volume-6439d6a6-b652-4c86-826a-0cb4f07d3787": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053506878s
Feb 26 00:59:22.719: INFO: Pod "downwardapi-volume-6439d6a6-b652-4c86-826a-0cb4f07d3787": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095779274s
STEP: Saw pod success
Feb 26 00:59:22.719: INFO: Pod "downwardapi-volume-6439d6a6-b652-4c86-826a-0cb4f07d3787" satisfied condition "success or failure"
Feb 26 00:59:22.738: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6439d6a6-b652-4c86-826a-0cb4f07d3787 container client-container: 
STEP: delete the pod
Feb 26 00:59:22.793: INFO: Waiting for pod downwardapi-volume-6439d6a6-b652-4c86-826a-0cb4f07d3787 to disappear
Feb 26 00:59:22.798: INFO: Pod downwardapi-volume-6439d6a6-b652-4c86-826a-0cb4f07d3787 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:59:22.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1959" for this suite.

• [SLOW TEST:10.385 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":179,"skipped":2801,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:59:22.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override arguments
Feb 26 00:59:23.078: INFO: Waiting up to 5m0s for pod "client-containers-847b719a-cb19-4c06-a157-eae2f9150d9c" in namespace "containers-1896" to be "success or failure"
Feb 26 00:59:23.092: INFO: Pod "client-containers-847b719a-cb19-4c06-a157-eae2f9150d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.422507ms
Feb 26 00:59:25.099: INFO: Pod "client-containers-847b719a-cb19-4c06-a157-eae2f9150d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020799671s
Feb 26 00:59:27.107: INFO: Pod "client-containers-847b719a-cb19-4c06-a157-eae2f9150d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029609987s
Feb 26 00:59:29.114: INFO: Pod "client-containers-847b719a-cb19-4c06-a157-eae2f9150d9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036067707s
Feb 26 00:59:31.119: INFO: Pod "client-containers-847b719a-cb19-4c06-a157-eae2f9150d9c": Phase="Running", Reason="", readiness=true. Elapsed: 8.040803509s
Feb 26 00:59:33.127: INFO: Pod "client-containers-847b719a-cb19-4c06-a157-eae2f9150d9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049471203s
STEP: Saw pod success
Feb 26 00:59:33.128: INFO: Pod "client-containers-847b719a-cb19-4c06-a157-eae2f9150d9c" satisfied condition "success or failure"
Feb 26 00:59:33.132: INFO: Trying to get logs from node jerma-node pod client-containers-847b719a-cb19-4c06-a157-eae2f9150d9c container test-container: 
STEP: delete the pod
Feb 26 00:59:33.169: INFO: Waiting for pod client-containers-847b719a-cb19-4c06-a157-eae2f9150d9c to disappear
Feb 26 00:59:33.210: INFO: Pod client-containers-847b719a-cb19-4c06-a157-eae2f9150d9c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 00:59:33.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1896" for this suite.

• [SLOW TEST:10.411 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":280,"completed":180,"skipped":2808,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 00:59:33.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Feb 26 00:59:33.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7949'
Feb 26 00:59:36.603: INFO: stderr: ""
Feb 26 00:59:36.604: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 26 00:59:36.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7949'
Feb 26 00:59:36.743: INFO: stderr: ""
Feb 26 00:59:36.743: INFO: stdout: "update-demo-nautilus-wgmf7 update-demo-nautilus-x846j "
Feb 26 00:59:36.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgmf7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 00:59:36.922: INFO: stderr: ""
Feb 26 00:59:36.923: INFO: stdout: ""
Feb 26 00:59:36.923: INFO: update-demo-nautilus-wgmf7 is created but not running
Feb 26 00:59:41.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7949'
Feb 26 00:59:42.975: INFO: stderr: ""
Feb 26 00:59:42.975: INFO: stdout: "update-demo-nautilus-wgmf7 update-demo-nautilus-x846j "
Feb 26 00:59:42.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgmf7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 00:59:44.330: INFO: stderr: ""
Feb 26 00:59:44.330: INFO: stdout: ""
Feb 26 00:59:44.330: INFO: update-demo-nautilus-wgmf7 is created but not running
Feb 26 00:59:49.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7949'
Feb 26 00:59:49.466: INFO: stderr: ""
Feb 26 00:59:49.467: INFO: stdout: "update-demo-nautilus-wgmf7 update-demo-nautilus-x846j "
Feb 26 00:59:49.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgmf7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 00:59:49.583: INFO: stderr: ""
Feb 26 00:59:49.583: INFO: stdout: "true"
Feb 26 00:59:49.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgmf7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 00:59:49.718: INFO: stderr: ""
Feb 26 00:59:49.718: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 00:59:49.718: INFO: validating pod update-demo-nautilus-wgmf7
Feb 26 00:59:49.724: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 00:59:49.725: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 26 00:59:49.725: INFO: update-demo-nautilus-wgmf7 is verified up and running
Feb 26 00:59:49.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x846j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 00:59:49.823: INFO: stderr: ""
Feb 26 00:59:49.824: INFO: stdout: "true"
Feb 26 00:59:49.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x846j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 00:59:49.947: INFO: stderr: ""
Feb 26 00:59:49.947: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 00:59:49.947: INFO: validating pod update-demo-nautilus-x846j
Feb 26 00:59:49.957: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 00:59:49.957: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 26 00:59:49.957: INFO: update-demo-nautilus-x846j is verified up and running
STEP: scaling down the replication controller
Feb 26 00:59:49.960: INFO: scanned /root for discovery docs: 
Feb 26 00:59:49.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7949'
Feb 26 00:59:51.127: INFO: stderr: ""
Feb 26 00:59:51.127: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 26 00:59:51.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7949'
Feb 26 00:59:51.250: INFO: stderr: ""
Feb 26 00:59:51.250: INFO: stdout: "update-demo-nautilus-wgmf7 update-demo-nautilus-x846j "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 26 00:59:56.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7949'
Feb 26 00:59:56.443: INFO: stderr: ""
Feb 26 00:59:56.443: INFO: stdout: "update-demo-nautilus-wgmf7 update-demo-nautilus-x846j "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 26 01:00:01.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7949'
Feb 26 01:00:01.607: INFO: stderr: ""
Feb 26 01:00:01.607: INFO: stdout: "update-demo-nautilus-wgmf7 update-demo-nautilus-x846j "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 26 01:00:06.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7949'
Feb 26 01:00:06.753: INFO: stderr: ""
Feb 26 01:00:06.753: INFO: stdout: "update-demo-nautilus-wgmf7 "
Feb 26 01:00:06.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgmf7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 01:00:06.859: INFO: stderr: ""
Feb 26 01:00:06.859: INFO: stdout: "true"
Feb 26 01:00:06.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgmf7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 01:00:06.975: INFO: stderr: ""
Feb 26 01:00:06.975: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 01:00:06.975: INFO: validating pod update-demo-nautilus-wgmf7
Feb 26 01:00:06.981: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 01:00:06.981: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 26 01:00:06.981: INFO: update-demo-nautilus-wgmf7 is verified up and running
STEP: scaling up the replication controller
Feb 26 01:00:06.992: INFO: scanned /root for discovery docs: 
Feb 26 01:00:06.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7949'
Feb 26 01:00:08.195: INFO: stderr: ""
Feb 26 01:00:08.195: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 26 01:00:08.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7949'
Feb 26 01:00:08.344: INFO: stderr: ""
Feb 26 01:00:08.344: INFO: stdout: "update-demo-nautilus-p7vm7 update-demo-nautilus-wgmf7 "
Feb 26 01:00:08.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p7vm7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 01:00:08.453: INFO: stderr: ""
Feb 26 01:00:08.453: INFO: stdout: ""
Feb 26 01:00:08.453: INFO: update-demo-nautilus-p7vm7 is created but not running
Feb 26 01:00:13.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7949'
Feb 26 01:00:13.601: INFO: stderr: ""
Feb 26 01:00:13.602: INFO: stdout: "update-demo-nautilus-p7vm7 update-demo-nautilus-wgmf7 "
Feb 26 01:00:13.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p7vm7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 01:00:13.714: INFO: stderr: ""
Feb 26 01:00:13.715: INFO: stdout: "true"
Feb 26 01:00:13.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p7vm7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 01:00:13.904: INFO: stderr: ""
Feb 26 01:00:13.904: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 01:00:13.904: INFO: validating pod update-demo-nautilus-p7vm7
Feb 26 01:00:13.928: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 01:00:13.928: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 26 01:00:13.928: INFO: update-demo-nautilus-p7vm7 is verified up and running
Feb 26 01:00:13.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgmf7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 01:00:14.042: INFO: stderr: ""
Feb 26 01:00:14.043: INFO: stdout: "true"
Feb 26 01:00:14.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgmf7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7949'
Feb 26 01:00:14.159: INFO: stderr: ""
Feb 26 01:00:14.159: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 01:00:14.159: INFO: validating pod update-demo-nautilus-wgmf7
Feb 26 01:00:14.168: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 01:00:14.168: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 26 01:00:14.168: INFO: update-demo-nautilus-wgmf7 is verified up and running
STEP: using delete to clean up resources
Feb 26 01:00:14.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7949'
Feb 26 01:00:14.280: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 01:00:14.281: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 26 01:00:14.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7949'
Feb 26 01:00:14.371: INFO: stderr: "No resources found in kubectl-7949 namespace.\n"
Feb 26 01:00:14.371: INFO: stdout: ""
Feb 26 01:00:14.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7949 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 26 01:00:14.486: INFO: stderr: ""
Feb 26 01:00:14.486: INFO: stdout: "update-demo-nautilus-p7vm7\nupdate-demo-nautilus-wgmf7\n"
Feb 26 01:00:14.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7949'
Feb 26 01:00:15.789: INFO: stderr: "No resources found in kubectl-7949 namespace.\n"
Feb 26 01:00:15.789: INFO: stdout: ""
Feb 26 01:00:15.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7949 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 26 01:00:16.224: INFO: stderr: ""
Feb 26 01:00:16.225: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:00:16.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7949" for this suite.

• [SLOW TEST:43.011 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":280,"completed":181,"skipped":2809,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:00:16.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-902a223d-8e8d-4ded-a6d2-1b90ab8c8075
STEP: Creating a pod to test consume configMaps
Feb 26 01:00:16.561: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-00bc76d9-bef6-4a06-b85e-b786001fd241" in namespace "projected-5472" to be "success or failure"
Feb 26 01:00:16.727: INFO: Pod "pod-projected-configmaps-00bc76d9-bef6-4a06-b85e-b786001fd241": Phase="Pending", Reason="", readiness=false. Elapsed: 165.904856ms
Feb 26 01:00:18.835: INFO: Pod "pod-projected-configmaps-00bc76d9-bef6-4a06-b85e-b786001fd241": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273709814s
Feb 26 01:00:20.887: INFO: Pod "pod-projected-configmaps-00bc76d9-bef6-4a06-b85e-b786001fd241": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3251692s
Feb 26 01:00:22.893: INFO: Pod "pod-projected-configmaps-00bc76d9-bef6-4a06-b85e-b786001fd241": Phase="Pending", Reason="", readiness=false. Elapsed: 6.331186536s
Feb 26 01:00:25.464: INFO: Pod "pod-projected-configmaps-00bc76d9-bef6-4a06-b85e-b786001fd241": Phase="Pending", Reason="", readiness=false. Elapsed: 8.902764673s
Feb 26 01:00:27.547: INFO: Pod "pod-projected-configmaps-00bc76d9-bef6-4a06-b85e-b786001fd241": Phase="Pending", Reason="", readiness=false. Elapsed: 10.985794616s
Feb 26 01:00:29.555: INFO: Pod "pod-projected-configmaps-00bc76d9-bef6-4a06-b85e-b786001fd241": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.993029473s
STEP: Saw pod success
Feb 26 01:00:29.555: INFO: Pod "pod-projected-configmaps-00bc76d9-bef6-4a06-b85e-b786001fd241" satisfied condition "success or failure"
Feb 26 01:00:29.560: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-00bc76d9-bef6-4a06-b85e-b786001fd241 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 26 01:00:29.769: INFO: Waiting for pod pod-projected-configmaps-00bc76d9-bef6-4a06-b85e-b786001fd241 to disappear
Feb 26 01:00:29.873: INFO: Pod pod-projected-configmaps-00bc76d9-bef6-4a06-b85e-b786001fd241 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:00:29.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5472" for this suite.

• [SLOW TEST:13.650 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":182,"skipped":2856,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:00:29.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:00:30.055: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 26 01:00:30.075: INFO: Number of nodes with available pods: 0
Feb 26 01:00:30.076: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 26 01:00:30.131: INFO: Number of nodes with available pods: 0
Feb 26 01:00:30.131: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:31.230: INFO: Number of nodes with available pods: 0
Feb 26 01:00:31.230: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:32.136: INFO: Number of nodes with available pods: 0
Feb 26 01:00:32.136: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:33.139: INFO: Number of nodes with available pods: 0
Feb 26 01:00:33.139: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:34.148: INFO: Number of nodes with available pods: 0
Feb 26 01:00:34.149: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:35.334: INFO: Number of nodes with available pods: 0
Feb 26 01:00:35.334: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:36.225: INFO: Number of nodes with available pods: 0
Feb 26 01:00:36.226: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:37.472: INFO: Number of nodes with available pods: 0
Feb 26 01:00:37.472: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:38.140: INFO: Number of nodes with available pods: 1
Feb 26 01:00:38.140: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 26 01:00:38.206: INFO: Number of nodes with available pods: 1
Feb 26 01:00:38.206: INFO: Number of running nodes: 0, number of available pods: 1
Feb 26 01:00:39.214: INFO: Number of nodes with available pods: 0
Feb 26 01:00:39.214: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 26 01:00:39.234: INFO: Number of nodes with available pods: 0
Feb 26 01:00:39.234: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:40.240: INFO: Number of nodes with available pods: 0
Feb 26 01:00:40.240: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:41.243: INFO: Number of nodes with available pods: 0
Feb 26 01:00:41.243: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:42.333: INFO: Number of nodes with available pods: 0
Feb 26 01:00:42.333: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:43.260: INFO: Number of nodes with available pods: 0
Feb 26 01:00:43.260: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:44.241: INFO: Number of nodes with available pods: 0
Feb 26 01:00:44.241: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:45.242: INFO: Number of nodes with available pods: 0
Feb 26 01:00:45.242: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:46.244: INFO: Number of nodes with available pods: 0
Feb 26 01:00:46.244: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:47.244: INFO: Number of nodes with available pods: 0
Feb 26 01:00:47.244: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:48.243: INFO: Number of nodes with available pods: 0
Feb 26 01:00:48.244: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:49.243: INFO: Number of nodes with available pods: 0
Feb 26 01:00:49.243: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:50.241: INFO: Number of nodes with available pods: 0
Feb 26 01:00:50.242: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:51.240: INFO: Number of nodes with available pods: 0
Feb 26 01:00:51.241: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:52.241: INFO: Number of nodes with available pods: 0
Feb 26 01:00:52.241: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:53.251: INFO: Number of nodes with available pods: 0
Feb 26 01:00:53.251: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:54.489: INFO: Number of nodes with available pods: 0
Feb 26 01:00:54.490: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:55.242: INFO: Number of nodes with available pods: 0
Feb 26 01:00:55.243: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:56.245: INFO: Number of nodes with available pods: 0
Feb 26 01:00:56.245: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:57.366: INFO: Number of nodes with available pods: 0
Feb 26 01:00:57.366: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:00:59.463: INFO: Number of nodes with available pods: 0
Feb 26 01:00:59.463: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:00.510: INFO: Number of nodes with available pods: 0
Feb 26 01:01:00.510: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:01.239: INFO: Number of nodes with available pods: 0
Feb 26 01:01:01.240: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:02.240: INFO: Number of nodes with available pods: 1
Feb 26 01:01:02.240: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6736, will wait for the garbage collector to delete the pods
Feb 26 01:01:02.307: INFO: Deleting DaemonSet.extensions daemon-set took: 8.735912ms
Feb 26 01:01:02.607: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.546301ms
Feb 26 01:01:13.139: INFO: Number of nodes with available pods: 0
Feb 26 01:01:13.139: INFO: Number of running nodes: 0, number of available pods: 0
Feb 26 01:01:13.144: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6736/daemonsets","resourceVersion":"10773631"},"items":null}

Feb 26 01:01:13.147: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6736/pods","resourceVersion":"10773631"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:01:13.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6736" for this suite.

• [SLOW TEST:43.456 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":280,"completed":183,"skipped":2862,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:01:13.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:01:21.590: INFO: Waiting up to 5m0s for pod "client-envvars-58040d7a-ac2a-4fa7-9b51-0d981f968ec6" in namespace "pods-2490" to be "success or failure"
Feb 26 01:01:21.595: INFO: Pod "client-envvars-58040d7a-ac2a-4fa7-9b51-0d981f968ec6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.917572ms
Feb 26 01:01:23.606: INFO: Pod "client-envvars-58040d7a-ac2a-4fa7-9b51-0d981f968ec6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015909176s
Feb 26 01:01:25.614: INFO: Pod "client-envvars-58040d7a-ac2a-4fa7-9b51-0d981f968ec6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024448922s
Feb 26 01:01:27.620: INFO: Pod "client-envvars-58040d7a-ac2a-4fa7-9b51-0d981f968ec6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030613633s
Feb 26 01:01:29.629: INFO: Pod "client-envvars-58040d7a-ac2a-4fa7-9b51-0d981f968ec6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038905019s
STEP: Saw pod success
Feb 26 01:01:29.629: INFO: Pod "client-envvars-58040d7a-ac2a-4fa7-9b51-0d981f968ec6" satisfied condition "success or failure"
Feb 26 01:01:29.636: INFO: Trying to get logs from node jerma-node pod client-envvars-58040d7a-ac2a-4fa7-9b51-0d981f968ec6 container env3cont: 
STEP: delete the pod
Feb 26 01:01:30.257: INFO: Waiting for pod client-envvars-58040d7a-ac2a-4fa7-9b51-0d981f968ec6 to disappear
Feb 26 01:01:30.490: INFO: Pod client-envvars-58040d7a-ac2a-4fa7-9b51-0d981f968ec6 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:01:30.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2490" for this suite.

• [SLOW TEST:17.202 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":280,"completed":184,"skipped":2864,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:01:30.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 26 01:01:31.074: INFO: Number of nodes with available pods: 0
Feb 26 01:01:31.074: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:01:32.821: INFO: Number of nodes with available pods: 0
Feb 26 01:01:32.821: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:01:33.313: INFO: Number of nodes with available pods: 0
Feb 26 01:01:33.313: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:01:34.189: INFO: Number of nodes with available pods: 0
Feb 26 01:01:34.189: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:01:35.101: INFO: Number of nodes with available pods: 0
Feb 26 01:01:35.101: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:01:37.461: INFO: Number of nodes with available pods: 0
Feb 26 01:01:37.461: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:01:38.118: INFO: Number of nodes with available pods: 0
Feb 26 01:01:38.118: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:01:39.544: INFO: Number of nodes with available pods: 0
Feb 26 01:01:39.544: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:01:40.361: INFO: Number of nodes with available pods: 0
Feb 26 01:01:40.361: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:01:41.093: INFO: Number of nodes with available pods: 1
Feb 26 01:01:41.093: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:01:42.086: INFO: Number of nodes with available pods: 1
Feb 26 01:01:42.086: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:01:43.087: INFO: Number of nodes with available pods: 2
Feb 26 01:01:43.087: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 26 01:01:43.148: INFO: Number of nodes with available pods: 1
Feb 26 01:01:43.148: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:44.274: INFO: Number of nodes with available pods: 1
Feb 26 01:01:44.274: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:45.402: INFO: Number of nodes with available pods: 1
Feb 26 01:01:45.403: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:46.160: INFO: Number of nodes with available pods: 1
Feb 26 01:01:46.160: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:47.831: INFO: Number of nodes with available pods: 1
Feb 26 01:01:47.832: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:48.163: INFO: Number of nodes with available pods: 1
Feb 26 01:01:48.163: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:49.160: INFO: Number of nodes with available pods: 1
Feb 26 01:01:49.160: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:50.605: INFO: Number of nodes with available pods: 1
Feb 26 01:01:50.605: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:51.411: INFO: Number of nodes with available pods: 1
Feb 26 01:01:51.411: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:52.162: INFO: Number of nodes with available pods: 1
Feb 26 01:01:52.162: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:01:53.156: INFO: Number of nodes with available pods: 2
Feb 26 01:01:53.156: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5046, will wait for the garbage collector to delete the pods
Feb 26 01:01:53.229: INFO: Deleting DaemonSet.extensions daemon-set took: 15.117696ms
Feb 26 01:01:53.529: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.4996ms
Feb 26 01:02:03.134: INFO: Number of nodes with available pods: 0
Feb 26 01:02:03.134: INFO: Number of running nodes: 0, number of available pods: 0
Feb 26 01:02:03.138: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5046/daemonsets","resourceVersion":"10773863"},"items":null}

Feb 26 01:02:03.140: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5046/pods","resourceVersion":"10773863"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:02:03.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5046" for this suite.

• [SLOW TEST:32.610 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":280,"completed":185,"skipped":2864,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:02:03.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-9462e338-2dc2-47ef-b35b-a71aee2e8c52 in namespace container-probe-2521
Feb 26 01:02:13.287: INFO: Started pod busybox-9462e338-2dc2-47ef-b35b-a71aee2e8c52 in namespace container-probe-2521
STEP: checking the pod's current state and verifying that restartCount is present
Feb 26 01:02:13.291: INFO: Initial restart count of pod busybox-9462e338-2dc2-47ef-b35b-a71aee2e8c52 is 0
Feb 26 01:03:05.569: INFO: Restart count of pod container-probe-2521/busybox-9462e338-2dc2-47ef-b35b-a71aee2e8c52 is now 1 (52.278337715s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:03:05.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2521" for this suite.

• [SLOW TEST:62.519 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":186,"skipped":2869,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:03:05.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Feb 26 01:03:05.768: INFO: >>> kubeConfig: /root/.kube/config
Feb 26 01:03:08.808: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:03:20.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-670" for this suite.

• [SLOW TEST:14.416 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":280,"completed":187,"skipped":2877,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:03:20.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-0d24a2c7-7bf2-433a-a3e9-cd0a86690712
STEP: Creating a pod to test consume secrets
Feb 26 01:03:20.232: INFO: Waiting up to 5m0s for pod "pod-secrets-540af6c5-ebdf-43f9-9f09-458c4ae728d6" in namespace "secrets-428" to be "success or failure"
Feb 26 01:03:20.249: INFO: Pod "pod-secrets-540af6c5-ebdf-43f9-9f09-458c4ae728d6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.949789ms
Feb 26 01:03:22.257: INFO: Pod "pod-secrets-540af6c5-ebdf-43f9-9f09-458c4ae728d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024876645s
Feb 26 01:03:24.264: INFO: Pod "pod-secrets-540af6c5-ebdf-43f9-9f09-458c4ae728d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031408108s
Feb 26 01:03:26.280: INFO: Pod "pod-secrets-540af6c5-ebdf-43f9-9f09-458c4ae728d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047563456s
Feb 26 01:03:28.289: INFO: Pod "pod-secrets-540af6c5-ebdf-43f9-9f09-458c4ae728d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056437468s
Feb 26 01:03:30.298: INFO: Pod "pod-secrets-540af6c5-ebdf-43f9-9f09-458c4ae728d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065121637s
STEP: Saw pod success
Feb 26 01:03:30.298: INFO: Pod "pod-secrets-540af6c5-ebdf-43f9-9f09-458c4ae728d6" satisfied condition "success or failure"
Feb 26 01:03:30.303: INFO: Trying to get logs from node jerma-node pod pod-secrets-540af6c5-ebdf-43f9-9f09-458c4ae728d6 container secret-volume-test: 
STEP: delete the pod
Feb 26 01:03:30.572: INFO: Waiting for pod pod-secrets-540af6c5-ebdf-43f9-9f09-458c4ae728d6 to disappear
Feb 26 01:03:30.578: INFO: Pod pod-secrets-540af6c5-ebdf-43f9-9f09-458c4ae728d6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:03:30.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-428" for this suite.

• [SLOW TEST:10.496 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":188,"skipped":2894,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:03:30.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 26 01:03:31.324: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 26 01:03:33.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275811, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275811, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275811, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275811, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:03:35.343: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275811, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275811, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275811, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275811, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:03:37.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275811, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275811, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275811, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718275811, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 26 01:03:40.369: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:03:52.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1821" for this suite.
STEP: Destroying namespace "webhook-1821-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:22.534 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":280,"completed":189,"skipped":2923,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
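
The three outcomes above hinge on two fields of the webhook registration: timeoutSeconds and failurePolicy. A minimal Go sketch of such a registration, assuming a context-style client-go (v0.18+); the service name and namespace come from this run, while the webhook name and rule are illustrative:

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerSlowWebhook registers a validating webhook whose backend takes ~5s.
// With timeoutSeconds=1 the API server gives up first: failurePolicy=Fail
// rejects the request, failurePolicy=Ignore admits it anyway. Omitting
// timeoutSeconds defaults it to 10s in v1, so the 5s backend then succeeds.
func registerSlowWebhook(ctx context.Context, cs *kubernetes.Clientset) error {
	timeout := int32(1)                      // shorter than the 5s webhook latency
	policy := admissionregistrationv1.Ignore // or admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook-example"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name:                    "slow.example.com",
			TimeoutSeconds:          &timeout,
			FailurePolicy:           &policy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-1821",
					Name:      "e2e-test-webhook",
				},
			},
		}},
	}
	_, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(ctx, cfg, metav1.CreateOptions{})
	return err
}
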
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:03:53.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 26 01:03:53.215: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3425 /api/v1/namespaces/watch-3425/configmaps/e2e-watch-test-configmap-a 024f4046-448e-4dca-ae59-019d5c401100 10774281 0 2020-02-26 01:03:53 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 26 01:03:53.215: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3425 /api/v1/namespaces/watch-3425/configmaps/e2e-watch-test-configmap-a 024f4046-448e-4dca-ae59-019d5c401100 10774281 0 2020-02-26 01:03:53 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 26 01:04:03.228: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3425 /api/v1/namespaces/watch-3425/configmaps/e2e-watch-test-configmap-a 024f4046-448e-4dca-ae59-019d5c401100 10774322 0 2020-02-26 01:03:53 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 26 01:04:03.228: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3425 /api/v1/namespaces/watch-3425/configmaps/e2e-watch-test-configmap-a 024f4046-448e-4dca-ae59-019d5c401100 10774322 0 2020-02-26 01:03:53 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 26 01:04:13.240: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3425 /api/v1/namespaces/watch-3425/configmaps/e2e-watch-test-configmap-a 024f4046-448e-4dca-ae59-019d5c401100 10774346 0 2020-02-26 01:03:53 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 26 01:04:13.240: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3425 /api/v1/namespaces/watch-3425/configmaps/e2e-watch-test-configmap-a 024f4046-448e-4dca-ae59-019d5c401100 10774346 0 2020-02-26 01:03:53 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 26 01:04:23.255: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3425 /api/v1/namespaces/watch-3425/configmaps/e2e-watch-test-configmap-a 024f4046-448e-4dca-ae59-019d5c401100 10774368 0 2020-02-26 01:03:53 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 26 01:04:23.255: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3425 /api/v1/namespaces/watch-3425/configmaps/e2e-watch-test-configmap-a 024f4046-448e-4dca-ae59-019d5c401100 10774368 0 2020-02-26 01:03:53 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 26 01:04:33.725: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3425 /api/v1/namespaces/watch-3425/configmaps/e2e-watch-test-configmap-b 85cb93ef-8589-44c0-bef4-762f81673eda 10774393 0 2020-02-26 01:04:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 26 01:04:33.725: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3425 /api/v1/namespaces/watch-3425/configmaps/e2e-watch-test-configmap-b 85cb93ef-8589-44c0-bef4-762f81673eda 10774393 0 2020-02-26 01:04:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 26 01:04:43.740: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3425 /api/v1/namespaces/watch-3425/configmaps/e2e-watch-test-configmap-b 85cb93ef-8589-44c0-bef4-762f81673eda 10774417 0 2020-02-26 01:04:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 26 01:04:43.741: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3425 /api/v1/namespaces/watch-3425/configmaps/e2e-watch-test-configmap-b 85cb93ef-8589-44c0-bef4-762f81673eda 10774417 0 2020-02-26 01:04:33 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:04:53.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3425" for this suite.

• [SLOW TEST:60.635 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":280,"completed":190,"skipped":2934,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
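
The three watches in this spec differ only in their label selector (label A, label B, and A-or-B), which is why each event above is delivered twice: once to the single-label watch and once to the A-or-B watch. A sketch of one such watch, assuming a context-style client-go:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMapsByLabel opens a watch filtered by a label selector and
// prints ADDED/MODIFIED/DELETED events as they arrive, mirroring the
// "Got : ..." lines above. Example selectors matching this run:
//   "watch-this-configmap=multiple-watchers-A"
//   "watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)"
func watchConfigMapsByLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() { // one event per add/update/delete matching the selector
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}
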
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:04:53.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name projected-secret-test-3f577ce1-340a-49b2-996b-3072cecb45c1
STEP: Creating a pod to test consume secrets
Feb 26 01:04:53.909: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d5c5deab-67ff-4f79-9b83-c373dce154ee" in namespace "projected-4935" to be "success or failure"
Feb 26 01:04:53.919: INFO: Pod "pod-projected-secrets-d5c5deab-67ff-4f79-9b83-c373dce154ee": Phase="Pending", Reason="", readiness=false. Elapsed: 9.755005ms
Feb 26 01:04:55.930: INFO: Pod "pod-projected-secrets-d5c5deab-67ff-4f79-9b83-c373dce154ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020532202s
Feb 26 01:04:57.937: INFO: Pod "pod-projected-secrets-d5c5deab-67ff-4f79-9b83-c373dce154ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027350798s
Feb 26 01:04:59.986: INFO: Pod "pod-projected-secrets-d5c5deab-67ff-4f79-9b83-c373dce154ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077070302s
Feb 26 01:05:02.005: INFO: Pod "pod-projected-secrets-d5c5deab-67ff-4f79-9b83-c373dce154ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095529189s
STEP: Saw pod success
Feb 26 01:05:02.005: INFO: Pod "pod-projected-secrets-d5c5deab-67ff-4f79-9b83-c373dce154ee" satisfied condition "success or failure"
Feb 26 01:05:02.010: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-d5c5deab-67ff-4f79-9b83-c373dce154ee container secret-volume-test: 
STEP: delete the pod
Feb 26 01:05:02.155: INFO: Waiting for pod pod-projected-secrets-d5c5deab-67ff-4f79-9b83-c373dce154ee to disappear
Feb 26 01:05:02.196: INFO: Pod pod-projected-secrets-d5c5deab-67ff-4f79-9b83-c373dce154ee no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:05:02.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4935" for this suite.

• [SLOW TEST:8.445 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":191,"skipped":2937,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
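
The pod in this spec exposes the same secret through two projected volumes mounted at different paths, and the test container reads both copies. A sketch of that pod shape using corev1 types only; names are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod mounts one secret via two projected volumes; the test
// then asserts the file content and mode at both mount points from the logs.
func projectedSecretPod(secretName string) *corev1.Pod {
	vol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						},
					}},
				},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{vol("secret-volume-1"), vol("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
}
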
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:05:02.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 01:05:02.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-daf8ca46-a884-4ac3-88f2-6cbfc7462d4f" in namespace "projected-7142" to be "success or failure"
Feb 26 01:05:02.446: INFO: Pod "downwardapi-volume-daf8ca46-a884-4ac3-88f2-6cbfc7462d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.138576ms
Feb 26 01:05:05.190: INFO: Pod "downwardapi-volume-daf8ca46-a884-4ac3-88f2-6cbfc7462d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.76109046s
Feb 26 01:05:07.197: INFO: Pod "downwardapi-volume-daf8ca46-a884-4ac3-88f2-6cbfc7462d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.767737383s
Feb 26 01:05:09.203: INFO: Pod "downwardapi-volume-daf8ca46-a884-4ac3-88f2-6cbfc7462d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.773617632s
Feb 26 01:05:11.427: INFO: Pod "downwardapi-volume-daf8ca46-a884-4ac3-88f2-6cbfc7462d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.997210827s
Feb 26 01:05:13.441: INFO: Pod "downwardapi-volume-daf8ca46-a884-4ac3-88f2-6cbfc7462d4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.011615992s
STEP: Saw pod success
Feb 26 01:05:13.441: INFO: Pod "downwardapi-volume-daf8ca46-a884-4ac3-88f2-6cbfc7462d4f" satisfied condition "success or failure"
Feb 26 01:05:13.447: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-daf8ca46-a884-4ac3-88f2-6cbfc7462d4f container client-container: 
STEP: delete the pod
Feb 26 01:05:13.540: INFO: Waiting for pod downwardapi-volume-daf8ca46-a884-4ac3-88f2-6cbfc7462d4f to disappear
Feb 26 01:05:13.601: INFO: Pod downwardapi-volume-daf8ca46-a884-4ac3-88f2-6cbfc7462d4f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:05:13.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7142" for this suite.

• [SLOW TEST:11.413 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":192,"skipped":2942,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
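
"podname only" means a downward API projection exposing just metadata.name as a file. The volume shape, sketched with corev1 types:

package main

import corev1 "k8s.io/api/core/v1"

// downwardAPIVolume exposes the pod's own name at <mountPath>/podname; the
// test container reads that file and the suite checks it against the pod name.
func downwardAPIVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
}
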
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:05:13.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-bcc3dba4-0faf-432b-ab64-ac2da7cfef4a
STEP: Creating configMap with name cm-test-opt-upd-c6c039dd-19ea-4c80-876a-d3e01585589a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-bcc3dba4-0faf-432b-ab64-ac2da7cfef4a
STEP: Updating configmap cm-test-opt-upd-c6c039dd-19ea-4c80-876a-d3e01585589a
STEP: Creating configMap with name cm-test-opt-create-562bf204-c9ac-4a48-8792-de2cf7a0a0bd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:05:32.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5800" for this suite.

• [SLOW TEST:18.503 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":193,"skipped":2942,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
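
The create/update/delete dance above works because the volume marks its ConfigMap reference optional: the pod keeps running when cm-test-opt-del is deleted, and the kubelet materializes cm-test-opt-create in the volume once it appears. The relevant volume source, sketched with corev1 types:

package main

import corev1 "k8s.io/api/core/v1"

// optionalConfigMapVolume: with Optional=true the pod starts even if the
// named ConfigMap does not exist yet, and survives its deletion; the kubelet
// syncs creates/updates/deletes into the mounted files on its next sync.
func optionalConfigMapVolume(name string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "cm-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
				Optional:             &optional,
			},
		},
	}
}
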
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:05:32.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 26 01:05:42.427: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:05:42.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8394" for this suite.

• [SLOW TEST:10.386 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":194,"skipped":2963,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
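
Here the container writes "OK" to the termination message file and exits 0; FallbackToLogsOnError only falls back to the log tail when the file is empty and the container failed, so the file wins (hence the Expected: &{OK} match above). A sketch of the container spec:

package main

import corev1 "k8s.io/api/core/v1"

// terminationMessageContainer writes its message to the default termination
// message path; with FallbackToLogsOnError the kubelet would use the log tail
// only if this file were empty and the container exited non-zero.
func terminationMessageContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox",
		Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}
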
SSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:05:42.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:05:42.633: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-717bb054-a5b0-43dd-8416-001768741fad" in namespace "security-context-test-870" to be "success or failure"
Feb 26 01:05:42.669: INFO: Pod "alpine-nnp-false-717bb054-a5b0-43dd-8416-001768741fad": Phase="Pending", Reason="", readiness=false. Elapsed: 34.998875ms
Feb 26 01:05:44.677: INFO: Pod "alpine-nnp-false-717bb054-a5b0-43dd-8416-001768741fad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043555261s
Feb 26 01:05:46.688: INFO: Pod "alpine-nnp-false-717bb054-a5b0-43dd-8416-001768741fad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054028746s
Feb 26 01:05:49.757: INFO: Pod "alpine-nnp-false-717bb054-a5b0-43dd-8416-001768741fad": Phase="Pending", Reason="", readiness=false. Elapsed: 7.123700954s
Feb 26 01:05:51.776: INFO: Pod "alpine-nnp-false-717bb054-a5b0-43dd-8416-001768741fad": Phase="Pending", Reason="", readiness=false. Elapsed: 9.142560305s
Feb 26 01:05:53.835: INFO: Pod "alpine-nnp-false-717bb054-a5b0-43dd-8416-001768741fad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.200714029s
Feb 26 01:05:53.835: INFO: Pod "alpine-nnp-false-717bb054-a5b0-43dd-8416-001768741fad" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:05:53.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-870" for this suite.

• [SLOW TEST:11.412 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":195,"skipped":2972,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
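
The pod runs as a non-root user with allowPrivilegeEscalation=false, which sets the no_new_privs flag so a setuid binary cannot raise the effective UID; the test asserts the process stays at the pod's UID. A sketch of the container's security context (UID illustrative):

package main

import corev1 "k8s.io/api/core/v1"

// noPrivilegeEscalationContainer: with AllowPrivilegeEscalation=false the
// container process (and anything it execs, setuid or not) keeps UID 1000.
func noPrivilegeEscalationContainer() corev1.Container {
	allow := false
	runAsUser := int64(1000) // illustrative non-root UID
	return corev1.Container{
		Name:  "alpine-nnp-false",
		Image: "alpine",
		SecurityContext: &corev1.SecurityContext{
			AllowPrivilegeEscalation: &allow,
			RunAsUser:                &runAsUser,
		},
	}
}
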
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:05:53.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:06:28.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-418" for this suite.

• [SLOW TEST:34.252 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":280,"completed":196,"skipped":2999,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
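
"Locally restarted" is the key phrase: with restartPolicy=OnFailure the kubelet restarts the failing container inside the same pod, rather than the Job controller creating replacement pods. A sketch of a Job with that behavior; the fail-once trick via a marker file is my illustration, relying on the fact that an emptyDir outlives container restarts within a pod:

package main

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sometimesFailJob fails on its first run, is restarted in place by the
// kubelet, and succeeds on the second run because the marker file on the
// emptyDir survived the container restart. The Job still counts as one
// successful completion.
func sometimesFailJob() *batchv1.Job {
	completions := int32(1)
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "sometimes-fail-job"},
		Spec: batchv1.JobSpec{
			Completions: &completions,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure, // restart locally, no replacement pod
					Volumes: []corev1.Volume{{
						Name:         "data",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "busybox",
						// fail once, then succeed on the local restart
						Command:      []string{"sh", "-c", "test -f /data/ok || { touch /data/ok; exit 1; }"},
						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
				},
			},
		},
	}
}
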
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:06:28.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0226 01:07:12.557664       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 01:07:12.557: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:07:12.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4184" for this suite.

• [SLOW TEST:44.397 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":280,"completed":197,"skipped":3020,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
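
The orphaning behavior is selected on the delete call itself; the 30-second wait above is the test confirming the garbage collector then leaves the RC's pods alone. A sketch, assuming a context-style client-go:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// orphanDeleteRC deletes a replication controller with
// PropagationPolicy=Orphan: the RC goes away, its pods stay (their
// ownerReferences are stripped so the GC will not collect them).
func orphanDeleteRC(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}
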
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:07:12.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:07:12.662: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:07:20.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9201" for this suite.

• [SLOW TEST:8.421 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":280,"completed":198,"skipped":3022,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
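
CustomResourceDefinitions are served by the apiextensions API group, so listing them goes through its own clientset rather than the core one, built from the same kubeConfig. A sketch:

package main

import (
	"context"
	"fmt"

	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// listCRDs builds the apiextensions clientset from a rest.Config and lists
// all CustomResourceDefinition objects, which is what this spec verifies.
func listCRDs(ctx context.Context, cfg *rest.Config) error {
	cs, err := apiextensionsclientset.NewForConfig(cfg)
	if err != nil {
		return err
	}
	crds, err := cs.ApiextensionsV1().CustomResourceDefinitions().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
	return nil
}
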
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:07:21.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 26 01:07:21.601: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 26 01:07:24.898: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:07:27.385: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:07:29.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:07:31.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:07:33.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:07:34.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:07:36.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:07:38.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:07:40.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276044, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276041, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 26 01:07:43.979: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:07:43.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:07:45.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1424" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:24.546 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":280,"completed":199,"skipped":3039,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
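
Conversion between CR versions is driven by the conversion stanza on the CRD: strategy Webhook plus a client config pointing at the service deployed above, after which the API server calls the webhook whenever a stored object must be served at another version. A sketch of that stanza using the apiextensions v1 types; path and port are illustrative, service name and namespace are from this run:

package main

import apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"

// conversionSpec is the CustomResourceDefinition.Spec.Conversion block that
// routes v1 <-> v2 conversion through the deployed webhook service.
func conversionSpec(caBundle []byte) apiextensionsv1.CustomResourceConversion {
	path := "/crdconvert"   // illustrative handler path
	port := int32(9443)     // illustrative service port
	return apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ConversionReviewVersions: []string{"v1"},
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				CABundle: caBundle, // the server cert set up above
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook-1424",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
					Port:      &port,
				},
			},
		},
	}
}
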
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:07:45.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 26 01:07:45.688: INFO: Waiting up to 5m0s for pod "pod-4fc5bae3-13a4-43cb-9725-3403a1e7669c" in namespace "emptydir-3348" to be "success or failure"
Feb 26 01:07:45.823: INFO: Pod "pod-4fc5bae3-13a4-43cb-9725-3403a1e7669c": Phase="Pending", Reason="", readiness=false. Elapsed: 133.928348ms
Feb 26 01:07:47.832: INFO: Pod "pod-4fc5bae3-13a4-43cb-9725-3403a1e7669c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143487799s
Feb 26 01:07:49.839: INFO: Pod "pod-4fc5bae3-13a4-43cb-9725-3403a1e7669c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150718634s
Feb 26 01:07:51.848: INFO: Pod "pod-4fc5bae3-13a4-43cb-9725-3403a1e7669c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159081717s
Feb 26 01:07:53.860: INFO: Pod "pod-4fc5bae3-13a4-43cb-9725-3403a1e7669c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171089737s
Feb 26 01:07:55.869: INFO: Pod "pod-4fc5bae3-13a4-43cb-9725-3403a1e7669c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.180067762s
STEP: Saw pod success
Feb 26 01:07:55.869: INFO: Pod "pod-4fc5bae3-13a4-43cb-9725-3403a1e7669c" satisfied condition "success or failure"
Feb 26 01:07:55.873: INFO: Trying to get logs from node jerma-node pod pod-4fc5bae3-13a4-43cb-9725-3403a1e7669c container test-container: 
STEP: delete the pod
Feb 26 01:07:56.183: INFO: Waiting for pod pod-4fc5bae3-13a4-43cb-9725-3403a1e7669c to disappear
Feb 26 01:07:56.192: INFO: Pod pod-4fc5bae3-13a4-43cb-9725-3403a1e7669c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:07:56.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3348" for this suite.

• [SLOW TEST:10.686 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":200,"skipped":3059,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
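
(root,0666,tmpfs) decodes as: running as root, expecting mode 0666 on the created file, with the emptyDir backed by memory. The medium is the only part that needs spelling out; the (root,0777,default) spec later in this log differs only by omitting it. A sketch:

package main

import corev1 "k8s.io/api/core/v1"

// tmpfsEmptyDir is an emptyDir backed by RAM (mounted as tmpfs). Omitting
// Medium (StorageMediumDefault) makes the kubelet use the node's default
// disk-backed storage instead, which is the "default" variant of this test.
func tmpfsEmptyDir() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
}
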
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:07:56.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:07:56.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8700" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":201,"skipped":3062,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:07:58.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-b7608ba4-4472-42d4-99be-97c8f4bb9c10 in namespace container-probe-638
Feb 26 01:08:10.724: INFO: Started pod liveness-b7608ba4-4472-42d4-99be-97c8f4bb9c10 in namespace container-probe-638
STEP: checking the pod's current state and verifying that restartCount is present
Feb 26 01:08:10.729: INFO: Initial restart count of pod liveness-b7608ba4-4472-42d4-99be-97c8f4bb9c10 is 0
Feb 26 01:08:34.939: INFO: Restart count of pod container-probe-638/liveness-b7608ba4-4472-42d4-99be-97c8f4bb9c10 is now 1 (24.210401165s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:08:35.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-638" for this suite.

• [SLOW TEST:36.683 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":202,"skipped":3070,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
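
The restart recorded above (restartCount 0 -> 1 after ~24s) is the kubelet reacting to a failing HTTP liveness probe: the test image serves /healthz successfully for a while and then starts failing. A sketch of such a container; image and port are illustrative, and on client-go v0.23+ the embedded field is named ProbeHandler rather than Handler:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// httpLivenessContainer probes GET /healthz; once the probe fails
// FailureThreshold times in a row, the kubelet kills and restarts the
// container, incrementing restartCount.
func httpLivenessContainer() corev1.Container {
	return corev1.Container{
		Name:  "liveness",
		Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // illustrative
		Args:  []string{"liveness"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
			},
			InitialDelaySeconds: 15,
			TimeoutSeconds:      1,
			FailureThreshold:    1,
		},
	}
}
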
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:08:35.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:08:35.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9462" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":280,"completed":203,"skipped":3103,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
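
Table transformation is negotiated through the Accept header: clients ask for a resource "as=Table" and the server renders kubectl-style columns; a backend that cannot do the conversion answers 406 Not Acceptable, which is what this spec asserts. A sketch of such a request against a built-in resource, assuming a context-style client-go:

package main

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// tableRequest asks the API server to serve the pod list as a meta.k8s.io/v1
// Table. Built-in resources support this; a backend without Table support
// returns 406 instead.
func tableRequest(ctx context.Context, cs *kubernetes.Clientset, ns string) ([]byte, error) {
	return cs.CoreV1().RESTClient().Get().
		Namespace(ns).
		Resource("pods").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		DoRaw(ctx)
}
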
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:08:35.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 26 01:08:35.430: INFO: Waiting up to 5m0s for pod "pod-83314cd1-733c-478f-a48d-9437b680039d" in namespace "emptydir-4184" to be "success or failure"
Feb 26 01:08:35.507: INFO: Pod "pod-83314cd1-733c-478f-a48d-9437b680039d": Phase="Pending", Reason="", readiness=false. Elapsed: 77.331808ms
Feb 26 01:08:37.566: INFO: Pod "pod-83314cd1-733c-478f-a48d-9437b680039d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135764614s
Feb 26 01:08:39.573: INFO: Pod "pod-83314cd1-733c-478f-a48d-9437b680039d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142406795s
Feb 26 01:08:41.579: INFO: Pod "pod-83314cd1-733c-478f-a48d-9437b680039d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148744335s
Feb 26 01:08:43.587: INFO: Pod "pod-83314cd1-733c-478f-a48d-9437b680039d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157023554s
Feb 26 01:08:45.596: INFO: Pod "pod-83314cd1-733c-478f-a48d-9437b680039d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.165454469s
STEP: Saw pod success
Feb 26 01:08:45.596: INFO: Pod "pod-83314cd1-733c-478f-a48d-9437b680039d" satisfied condition "success or failure"
Feb 26 01:08:45.600: INFO: Trying to get logs from node jerma-node pod pod-83314cd1-733c-478f-a48d-9437b680039d container test-container: 
STEP: delete the pod
Feb 26 01:08:45.649: INFO: Waiting for pod pod-83314cd1-733c-478f-a48d-9437b680039d to disappear
Feb 26 01:08:45.783: INFO: Pod pod-83314cd1-733c-478f-a48d-9437b680039d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:08:45.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4184" for this suite.

• [SLOW TEST:10.734 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":204,"skipped":3105,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
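
The steps above amount to: create a pod whose container inspects an emptyDir mounted with the default medium (node disk, as opposed to tmpfs), then wait for the pod to reach a terminal phase. A rough client-go equivalent (illustrative; pod name, image, and command are stand-ins for the test's own image and invocation):

  package main

  import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
      panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
      panic(err)
    }
    ctx := context.TODO()
    pod := &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
      Spec: corev1.PodSpec{
        RestartPolicy: corev1.RestartPolicyNever,
        Containers: []corev1.Container{{
          Name:         "test-container",
          Image:        "busybox",
          Command:      []string{"sh", "-c", "stat -c '%a' /mnt/test"},
          VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
        }},
        Volumes: []corev1.Volume{{
          Name: "test-volume",
          // Leaving Medium unset selects the default medium: node-local disk.
          VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
        }},
      },
    }
    if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
      panic(err)
    }
    // The same "success or failure" wait the framework performs.
    err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
      p, err := cs.CoreV1().Pods("default").Get(ctx, "emptydir-demo", metav1.GetOptions{})
      if err != nil {
        return false, err
      }
      return p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed, nil
    })
    fmt.Println("pod finished:", err == nil)
  }
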
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:08:45.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service multi-endpoint-test in namespace services-5502
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5502 to expose endpoints map[]
Feb 26 01:08:47.393: INFO: successfully validated that service multi-endpoint-test in namespace services-5502 exposes endpoints map[] (34.708804ms elapsed)
STEP: Creating pod pod1 in namespace services-5502
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5502 to expose endpoints map[pod1:[100]]
Feb 26 01:08:51.700: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.264335089s elapsed, will retry)
Feb 26 01:08:55.849: INFO: successfully validated that service multi-endpoint-test in namespace services-5502 exposes endpoints map[pod1:[100]] (8.413170634s elapsed)
STEP: Creating pod pod2 in namespace services-5502
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5502 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 26 01:09:00.344: INFO: Unexpected endpoints: found map[56ac4473-41b8-4bf3-8fe4-2742568f56d5:[100]], expected map[pod1:[100] pod2:[101]] (4.47581854s elapsed, will retry)
Feb 26 01:09:04.522: INFO: successfully validated that service multi-endpoint-test in namespace services-5502 exposes endpoints map[pod1:[100] pod2:[101]] (8.653557605s elapsed)
STEP: Deleting pod pod1 in namespace services-5502
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5502 to expose endpoints map[pod2:[101]]
Feb 26 01:09:04.596: INFO: successfully validated that service multi-endpoint-test in namespace services-5502 exposes endpoints map[pod2:[101]] (66.300057ms elapsed)
STEP: Deleting pod pod2 in namespace services-5502
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5502 to expose endpoints map[]
Feb 26 01:09:05.627: INFO: successfully validated that service multi-endpoint-test in namespace services-5502 exposes endpoints map[] (1.015860203s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:09:05.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5502" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:19.808 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":280,"completed":205,"skipped":3126,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
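
The pattern driving the "waiting ... to expose endpoints" lines: after each pod create or delete, the test polls the service's Endpoints object until the expected name-to-port map appears. A trimmed client-go sketch of that polling half (illustrative; it reuses the multi-endpoint-test service and services-5502 namespace from the log, and only checks for a non-empty subset rather than the exact map):

  package main

  import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
      panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
      panic(err)
    }
    ctx := context.TODO()
    // Poll until the Endpoints carry at least one ready address and port,
    // mirroring the retry loop visible in the log above.
    err = wait.PollImmediate(time.Second, 3*time.Minute, func() (bool, error) {
      ep, err := cs.CoreV1().Endpoints("services-5502").Get(ctx, "multi-endpoint-test", metav1.GetOptions{})
      if err != nil {
        return false, nil // Endpoints object not created yet; keep polling
      }
      for _, ss := range ep.Subsets {
        if len(ss.Addresses) > 0 && len(ss.Ports) > 0 {
          return true, nil
        }
      }
      return false, nil
    })
    fmt.Println("endpoints ready:", err == nil)
  }
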
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:09:05.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 26 01:09:05.928: INFO: Number of nodes with available pods: 0
Feb 26 01:09:05.928: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:08.426: INFO: Number of nodes with available pods: 0
Feb 26 01:09:08.426: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:09.384: INFO: Number of nodes with available pods: 0
Feb 26 01:09:09.384: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:10.199: INFO: Number of nodes with available pods: 0
Feb 26 01:09:10.199: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:12.032: INFO: Number of nodes with available pods: 0
Feb 26 01:09:12.033: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:12.940: INFO: Number of nodes with available pods: 0
Feb 26 01:09:12.940: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:13.938: INFO: Number of nodes with available pods: 0
Feb 26 01:09:13.938: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:15.480: INFO: Number of nodes with available pods: 0
Feb 26 01:09:15.480: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:16.378: INFO: Number of nodes with available pods: 0
Feb 26 01:09:16.379: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:17.769: INFO: Number of nodes with available pods: 0
Feb 26 01:09:17.769: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:17.952: INFO: Number of nodes with available pods: 0
Feb 26 01:09:17.952: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:18.943: INFO: Number of nodes with available pods: 1
Feb 26 01:09:18.943: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:09:19.949: INFO: Number of nodes with available pods: 2
Feb 26 01:09:19.949: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 26 01:09:20.005: INFO: Number of nodes with available pods: 1
Feb 26 01:09:20.005: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:21.018: INFO: Number of nodes with available pods: 1
Feb 26 01:09:21.019: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:22.026: INFO: Number of nodes with available pods: 1
Feb 26 01:09:22.026: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:23.017: INFO: Number of nodes with available pods: 1
Feb 26 01:09:23.018: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:24.016: INFO: Number of nodes with available pods: 1
Feb 26 01:09:24.017: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:25.021: INFO: Number of nodes with available pods: 1
Feb 26 01:09:25.022: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:26.023: INFO: Number of nodes with available pods: 1
Feb 26 01:09:26.024: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:27.019: INFO: Number of nodes with available pods: 1
Feb 26 01:09:27.019: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:28.014: INFO: Number of nodes with available pods: 1
Feb 26 01:09:28.014: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:29.029: INFO: Number of nodes with available pods: 1
Feb 26 01:09:29.029: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:30.017: INFO: Number of nodes with available pods: 1
Feb 26 01:09:30.017: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:31.015: INFO: Number of nodes with available pods: 1
Feb 26 01:09:31.015: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:32.021: INFO: Number of nodes with available pods: 1
Feb 26 01:09:32.021: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:33.018: INFO: Number of nodes with available pods: 1
Feb 26 01:09:33.018: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:34.019: INFO: Number of nodes with available pods: 1
Feb 26 01:09:34.019: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:35.033: INFO: Number of nodes with available pods: 1
Feb 26 01:09:35.034: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:36.018: INFO: Number of nodes with available pods: 1
Feb 26 01:09:36.018: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:37.039: INFO: Number of nodes with available pods: 1
Feb 26 01:09:37.039: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:38.021: INFO: Number of nodes with available pods: 1
Feb 26 01:09:38.021: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:09:39.087: INFO: Number of nodes with available pods: 2
Feb 26 01:09:39.088: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8496, will wait for the garbage collector to delete the pods
Feb 26 01:09:39.163: INFO: Deleting DaemonSet.extensions daemon-set took: 11.310056ms
Feb 26 01:09:39.563: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.623896ms
Feb 26 01:09:53.170: INFO: Number of nodes with available pods: 0
Feb 26 01:09:53.170: INFO: Number of running nodes: 0, number of available pods: 0
Feb 26 01:09:53.173: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8496/daemonsets","resourceVersion":"10775833"},"items":null}

Feb 26 01:09:53.176: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8496/pods","resourceVersion":"10775833"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:09:53.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8496" for this suite.

• [SLOW TEST:47.446 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":280,"completed":206,"skipped":3134,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
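
For orientation, this spec reduces to: create a DaemonSet, wait until NumberAvailable equals DesiredNumberScheduled across both nodes, kill one daemon pod, and wait for the controller to revive it. A condensed client-go sketch of the create-and-wait part (illustrative; names, labels, and image are stand-ins):

  package main

  import (
    "context"
    "fmt"
    "time"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
      panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
      panic(err)
    }
    ctx := context.TODO()
    labels := map[string]string{"daemonset-name": "daemon-set"}
    ds := &appsv1.DaemonSet{
      ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
      Spec: appsv1.DaemonSetSpec{
        Selector: &metav1.LabelSelector{MatchLabels: labels},
        Template: corev1.PodTemplateSpec{
          ObjectMeta: metav1.ObjectMeta{Labels: labels},
          Spec: corev1.PodSpec{Containers: []corev1.Container{{
            Name: "app", Image: "busybox", Command: []string{"sleep", "3600"},
          }}},
        },
      },
    }
    if _, err := cs.AppsV1().DaemonSets("default").Create(ctx, ds, metav1.CreateOptions{}); err != nil {
      panic(err)
    }
    // "Check that daemon pods launch on every node": compare status counters.
    err = wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
      d, err := cs.AppsV1().DaemonSets("default").Get(ctx, "daemon-set", metav1.GetOptions{})
      if err != nil {
        return false, err
      }
      return d.Status.DesiredNumberScheduled > 0 &&
        d.Status.NumberAvailable == d.Status.DesiredNumberScheduled, nil
    })
    fmt.Println("daemon pods available on every node:", err == nil)
  }
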
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:09:53.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0226 01:10:08.861886       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 01:10:08.862: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:10:08.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4223" for this suite.

• [SLOW TEST:18.280 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":280,"completed":207,"skipped":3175,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
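
Mechanically, the "set half of pods ... to have rc simpletest-rc-to-stay as owner as well" step appends a second ownerReference to those pods, and the foreground delete of the first RC must then leave the dual-owned pods alone. A sketch of both moving parts (illustrative; error handling trimmed, and the -xxxxx pod name suffix is a placeholder for a generated name):

  package main

  import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    cs, _ := kubernetes.NewForConfig(cfg)
    ctx := context.TODO()
    ns := "gc-4223" // namespace from this run

    // Fetch the RC that must survive, so its UID can back an ownerReference.
    keep, _ := cs.CoreV1().ReplicationControllers(ns).Get(ctx, "simpletest-rc-to-stay", metav1.GetOptions{})

    // Append it as a second owner on one of the doomed RC's pods.
    pod, _ := cs.CoreV1().Pods(ns).Get(ctx, "simpletest-rc-to-be-deleted-xxxxx", metav1.GetOptions{})
    pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
      APIVersion: "v1",
      Kind:       "ReplicationController",
      Name:       keep.Name,
      UID:        keep.UID,
    })
    cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})

    // Delete the first owner; dependents with another live owner are spared.
    fg := metav1.DeletePropagationForeground
    cs.CoreV1().ReplicationControllers(ns).Delete(ctx, "simpletest-rc-to-be-deleted",
      metav1.DeleteOptions{PropagationPolicy: &fg})
  }
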
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:10:11.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75
Feb 26 01:10:14.949: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the sample API server.
Feb 26 01:10:18.482: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 26 01:10:25.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:27.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:30.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:31.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:33.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:35.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:37.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:39.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:41.436: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:43.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:46.395: INFO: Waited 948.760331ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:10:47.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5551" for this suite.

• [SLOW TEST:35.785 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":280,"completed":208,"skipped":3229,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
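
Behind the "Registering the sample API server." step: besides deploying the sample-apiserver, the test creates an APIService object so the aggregation layer proxies the group to it. A minimal sketch of that registration through the kube-aggregator client (illustrative; the wardle.example.com/v1alpha1 coordinates follow the upstream sample-apiserver, while the service name and CA bundle here are placeholders):

  package main

  import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/clientcmd"
    apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
    aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
  )

  func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
      panic(err)
    }
    ac, err := aggregatorclient.NewForConfig(cfg)
    if err != nil {
      panic(err)
    }
    port := int32(443)
    svc := &apiregistrationv1.APIService{
      ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
      Spec: apiregistrationv1.APIServiceSpec{
        Group:   "wardle.example.com",
        Version: "v1alpha1",
        Service: &apiregistrationv1.ServiceReference{
          Namespace: "aggregator-5551", // namespace from this run
          Name:      "sample-api",      // placeholder service name
          Port:      &port,
        },
        CABundle:             []byte("<pem-encoded CA for the sample server>"), // placeholder
        GroupPriorityMinimum: 2000,
        VersionPriority:      200,
      },
    }
    if _, err := ac.ApiregistrationV1().APIServices().Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
      panic(err)
    }
  }
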
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:10:47.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 26 01:10:48.243: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 26 01:10:50.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:52.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:54.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:10:56.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276248, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 26 01:10:59.384: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
Feb 26 01:10:59.558: INFO: Waiting for webhook configuration to be ready...
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:10:59.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6471" for this suite.
STEP: Destroying namespace "webhook-6471-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.771 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":280,"completed":209,"skipped":3229,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
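
The property this spec guards: admission webhooks must not intercept ValidatingWebhookConfiguration or MutatingWebhookConfiguration objects, since a broken webhook could otherwise lock out all changes to admission config; the apiserver exempts those objects, so the dummy configurations stay mutable and deletable. A sketch of creating and then removing such a configuration (illustrative; the webhook body and service reference are placeholders):

  package main

  import (
    "context"

    admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    cs, _ := kubernetes.NewForConfig(cfg)
    ctx := context.TODO()

    fail := admissionregistrationv1.Fail
    none := admissionregistrationv1.SideEffectClassNone
    cfgObj := &admissionregistrationv1.ValidatingWebhookConfiguration{
      ObjectMeta: metav1.ObjectMeta{Name: "dummy-validating-webhook"},
      Webhooks: []admissionregistrationv1.ValidatingWebhook{{
        Name: "deny-everything.example.com",
        ClientConfig: admissionregistrationv1.WebhookClientConfig{
          Service: &admissionregistrationv1.ServiceReference{
            Namespace: "webhook-6471", Name: "e2e-test-webhook",
          },
        },
        Rules: []admissionregistrationv1.RuleWithOperations{{
          Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.OperationAll},
          Rule: admissionregistrationv1.Rule{
            APIGroups: []string{"*"}, APIVersions: []string{"*"}, Resources: []string{"*"},
          },
        }},
        FailurePolicy:           &fail,
        SideEffects:             &none,
        AdmissionReviewVersions: []string{"v1"},
      }},
    }
    client := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations()
    if _, err := client.Create(ctx, cfgObj, metav1.CreateOptions{}); err != nil {
      panic(err)
    }
    // Deleting must succeed even though the webhook above matches everything:
    // webhook configuration objects are exempt from webhook admission.
    if err := client.Delete(ctx, "dummy-validating-webhook", metav1.DeleteOptions{}); err != nil {
      panic(err)
    }
  }
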
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:11:00.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:11:00.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Feb 26 01:11:00.465: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-26T01:11:00Z generation:1 name:name1 resourceVersion:10776253 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9317c688-4279-4c51-a299-e1277cd26dfb] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Feb 26 01:11:10.478: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-26T01:11:10Z generation:1 name:name2 resourceVersion:10776290 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:844f53ca-066f-4d0a-b68b-492642e428bb] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Feb 26 01:11:20.493: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-26T01:11:00Z generation:2 name:name1 resourceVersion:10776314 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9317c688-4279-4c51-a299-e1277cd26dfb] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Feb 26 01:11:30.504: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-26T01:11:10Z generation:2 name:name2 resourceVersion:10776338 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:844f53ca-066f-4d0a-b68b-492642e428bb] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Feb 26 01:11:40.520: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-26T01:11:00Z generation:2 name:name1 resourceVersion:10776358 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9317c688-4279-4c51-a299-e1277cd26dfb] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Feb 26 01:11:50.540: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-26T01:11:10Z generation:2 name:name2 resourceVersion:10776382 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:844f53ca-066f-4d0a-b68b-492642e428bb] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:12:01.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-4230" for this suite.

• [SLOW TEST:61.033 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":280,"completed":210,"skipped":3239,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
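
This spec walks one custom resource type through ADDED, MODIFIED, and DELETED watch events. With the CRD already registered, the same watch can be opened via the dynamic client (illustrative sketch; the mygroup.example.com/v1beta1 noxus coordinates and the cluster-scoped resource match the selfLinks in the log):

  package main

  import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
      panic(err)
    }
    dyn, err := dynamic.NewForConfig(cfg)
    if err != nil {
      panic(err)
    }
    gvr := schema.GroupVersionResource{
      Group: "mygroup.example.com", Version: "v1beta1", Resource: "noxus",
    }
    w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
    if err != nil {
      panic(err)
    }
    defer w.Stop()
    // Each event corresponds to one "Got : ADDED/MODIFIED/DELETED" log line.
    for ev := range w.ResultChan() {
      fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
    }
  }
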
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:12:01.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 01:12:01.346: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e748b367-a5b2-4d35-8a02-86369d1bd2f0" in namespace "downward-api-6592" to be "success or failure"
Feb 26 01:12:01.365: INFO: Pod "downwardapi-volume-e748b367-a5b2-4d35-8a02-86369d1bd2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.186346ms
Feb 26 01:12:03.402: INFO: Pod "downwardapi-volume-e748b367-a5b2-4d35-8a02-86369d1bd2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056204499s
Feb 26 01:12:05.407: INFO: Pod "downwardapi-volume-e748b367-a5b2-4d35-8a02-86369d1bd2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061078696s
Feb 26 01:12:07.415: INFO: Pod "downwardapi-volume-e748b367-a5b2-4d35-8a02-86369d1bd2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068855539s
Feb 26 01:12:09.462: INFO: Pod "downwardapi-volume-e748b367-a5b2-4d35-8a02-86369d1bd2f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116110714s
STEP: Saw pod success
Feb 26 01:12:09.463: INFO: Pod "downwardapi-volume-e748b367-a5b2-4d35-8a02-86369d1bd2f0" satisfied condition "success or failure"
Feb 26 01:12:09.468: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e748b367-a5b2-4d35-8a02-86369d1bd2f0 container client-container: 
STEP: delete the pod
Feb 26 01:12:09.948: INFO: Waiting for pod downwardapi-volume-e748b367-a5b2-4d35-8a02-86369d1bd2f0 to disappear
Feb 26 01:12:09.954: INFO: Pod downwardapi-volume-e748b367-a5b2-4d35-8a02-86369d1bd2f0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:12:09.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6592" for this suite.

• [SLOW TEST:8.899 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":211,"skipped":3258,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
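
Concretely, the pod under test mounts a downwardAPI volume whose one file is fed by a resourceFieldRef on the container's own memory request, then reads the file back. A pared-down version of that pod in client-go form (illustrative; names and the 32Mi request are stand-ins):

  package main

  import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    cs, _ := kubernetes.NewForConfig(cfg)

    pod := &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
      Spec: corev1.PodSpec{
        RestartPolicy: corev1.RestartPolicyNever,
        Containers: []corev1.Container{{
          Name:    "client-container",
          Image:   "busybox",
          Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
          Resources: corev1.ResourceRequirements{
            Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
          },
          VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
        }},
        Volumes: []corev1.Volume{{
          Name: "podinfo",
          VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
              Items: []corev1.DownwardAPIVolumeFile{{
                Path: "memory_request",
                // The kubelet writes the request, in bytes, into this file.
                ResourceFieldRef: &corev1.ResourceFieldSelector{
                  ContainerName: "client-container",
                  Resource:      "requests.memory",
                },
              }},
            },
          },
        }},
      },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
      panic(err)
    }
  }
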
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:12:09.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0226 01:12:22.198742       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 01:12:22.198: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:12:22.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9944" for this suite.

• [SLOW TEST:12.242 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":280,"completed":212,"skipped":3261,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
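
The flow here: create an RC, delete it without orphaning, and wait for the garbage collector to remove its pods. A compact sketch of the delete-and-verify half (illustrative; the RC name simpletest-rc and the app=gc-test label are placeholders for objects the test creates):

  package main

  import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    cs, _ := kubernetes.NewForConfig(cfg)
    ctx := context.TODO()
    ns := "default"

    // Background propagation = do NOT orphan: the GC deletes the RC's pods.
    bg := metav1.DeletePropagationBackground
    if err := cs.CoreV1().ReplicationControllers(ns).Delete(ctx, "simpletest-rc",
      metav1.DeleteOptions{PropagationPolicy: &bg}); err != nil {
      panic(err)
    }

    // "wait for all pods to be garbage collected"
    err := wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
      pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: "app=gc-test"})
      if err != nil {
        return false, err
      }
      return len(pods.Items) == 0, nil
    })
    fmt.Println("all pods garbage collected:", err == nil)
  }
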
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:12:22.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:12:30.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3911" for this suite.

• [SLOW TEST:8.167 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":280,"completed":213,"skipped":3284,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
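
The point of this spec: a container whose command and args are both blank must fall back to the image's own ENTRYPOINT and CMD. In pod-spec terms that just means leaving both fields unset (illustrative sketch; the image is a stand-in):

  package main

  import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
    cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    cs, _ := kubernetes.NewForConfig(cfg)

    pod := &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "image-defaults-demo"},
      Spec: corev1.PodSpec{
        RestartPolicy: corev1.RestartPolicyNever,
        Containers: []corev1.Container{{
          Name:  "main",
          Image: "nginx",
          // Command and Args deliberately unset: the container runtime
          // uses the image's ENTRYPOINT and CMD instead.
        }},
      },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
      panic(err)
    }
  }
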
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:12:30.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 01:12:30.522: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8bde47d-c9a6-4cbd-bd5c-ec597012f187" in namespace "projected-6542" to be "success or failure"
Feb 26 01:12:30.539: INFO: Pod "downwardapi-volume-e8bde47d-c9a6-4cbd-bd5c-ec597012f187": Phase="Pending", Reason="", readiness=false. Elapsed: 16.663954ms
Feb 26 01:12:32.587: INFO: Pod "downwardapi-volume-e8bde47d-c9a6-4cbd-bd5c-ec597012f187": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065211584s
Feb 26 01:12:34.596: INFO: Pod "downwardapi-volume-e8bde47d-c9a6-4cbd-bd5c-ec597012f187": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073666134s
Feb 26 01:12:36.620: INFO: Pod "downwardapi-volume-e8bde47d-c9a6-4cbd-bd5c-ec597012f187": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098286336s
Feb 26 01:12:38.633: INFO: Pod "downwardapi-volume-e8bde47d-c9a6-4cbd-bd5c-ec597012f187": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110975646s
Feb 26 01:12:40.640: INFO: Pod "downwardapi-volume-e8bde47d-c9a6-4cbd-bd5c-ec597012f187": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118146247s
Feb 26 01:12:42.659: INFO: Pod "downwardapi-volume-e8bde47d-c9a6-4cbd-bd5c-ec597012f187": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.137105722s
STEP: Saw pod success
Feb 26 01:12:42.660: INFO: Pod "downwardapi-volume-e8bde47d-c9a6-4cbd-bd5c-ec597012f187" satisfied condition "success or failure"
Feb 26 01:12:42.675: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e8bde47d-c9a6-4cbd-bd5c-ec597012f187 container client-container: 
STEP: delete the pod
Feb 26 01:12:42.768: INFO: Waiting for pod downwardapi-volume-e8bde47d-c9a6-4cbd-bd5c-ec597012f187 to disappear
Feb 26 01:12:42.819: INFO: Pod downwardapi-volume-e8bde47d-c9a6-4cbd-bd5c-ec597012f187 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:12:42.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6542" for this suite.

• [SLOW TEST:12.485 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":214,"skipped":3298,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
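
Same downward API mechanism as the memory-request spec above, but wrapped in a projected volume and reading limits.cpu. Only the volume definition changes, so just that piece is sketched here (illustrative fragment; by default the kubelet rounds the value up to whole cores, and the 1m divisor shown is an assumption to surface millicores):

  package main

  import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
  )

  // projectedCPULimitVolume wraps the same downward API file in a projected
  // volume source, the only structural difference from the earlier sketch.
  func projectedCPULimitVolume() corev1.Volume {
    return corev1.Volume{
      Name: "podinfo",
      VolumeSource: corev1.VolumeSource{
        Projected: &corev1.ProjectedVolumeSource{
          Sources: []corev1.VolumeProjection{{
            DownwardAPI: &corev1.DownwardAPIProjection{
              Items: []corev1.DownwardAPIVolumeFile{{
                Path: "cpu_limit",
                ResourceFieldRef: &corev1.ResourceFieldSelector{
                  ContainerName: "client-container",
                  Resource:      "limits.cpu",
                  Divisor:       resource.MustParse("1m"), // report millicores
                },
              }},
            },
          }},
        },
      },
    }
  }

  func main() { _ = projectedCPULimitVolume() }
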
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:12:42.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5397 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5397;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5397 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5397;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5397.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5397.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5397.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5397.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5397.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5397.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5397.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5397.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5397.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5397.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5397.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 248.166.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.166.248_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 248.166.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.166.248_tcp@PTR;
  sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5397 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5397;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5397 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5397;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5397.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5397.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5397.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5397.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5397.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5397.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5397.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5397.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5397.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5397.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5397.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5397.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 248.166.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.166.248_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 248.166.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.166.248_tcp@PTR;
  sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 26 01:12:55.414: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.419: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.436: INFO: Unable to read wheezy_udp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.497: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.506: INFO: Unable to read wheezy_udp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.513: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.522: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.532: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.575: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.580: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.584: INFO: Unable to read jessie_udp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.588: INFO: Unable to read jessie_tcp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.594: INFO: Unable to read jessie_udp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.598: INFO: Unable to read jessie_tcp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.603: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.606: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:12:55.643: INFO: Lookups using dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5397 wheezy_tcp@dns-test-service.dns-5397 wheezy_udp@dns-test-service.dns-5397.svc wheezy_tcp@dns-test-service.dns-5397.svc wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5397 jessie_tcp@dns-test-service.dns-5397 jessie_udp@dns-test-service.dns-5397.svc jessie_tcp@dns-test-service.dns-5397.svc jessie_udp@_http._tcp.dns-test-service.dns-5397.svc jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc]
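
The "Unable to read ... (get pods ...)" errors mean the framework could not yet fetch the OK markers from the probe pod; the probe keeps rewriting /results/* every second, and the framework retries the reads roughly every five seconds (see the timestamps below) until they succeed. Assuming the markers are fetched through the pod proxy subresource, which the "(get pods ...)" suffix suggests, a hand-run equivalent of one read would be:

# Path assembled from the namespace and pod name in the log above.
kubectl get --raw "/api/v1/namespaces/dns-5397/pods/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027/proxy/results/wheezy_udp@dns-test-service"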

Feb 26 01:13:00.655: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.662: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.670: INFO: Unable to read wheezy_udp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.677: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.686: INFO: Unable to read wheezy_udp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.693: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.701: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.708: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.757: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.764: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.770: INFO: Unable to read jessie_udp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.788: INFO: Unable to read jessie_tcp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.795: INFO: Unable to read jessie_udp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.801: INFO: Unable to read jessie_tcp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.808: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.815: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:00.893: INFO: Lookups using dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5397 wheezy_tcp@dns-test-service.dns-5397 wheezy_udp@dns-test-service.dns-5397.svc wheezy_tcp@dns-test-service.dns-5397.svc wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5397 jessie_tcp@dns-test-service.dns-5397 jessie_udp@dns-test-service.dns-5397.svc jessie_tcp@dns-test-service.dns-5397.svc jessie_udp@_http._tcp.dns-test-service.dns-5397.svc jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc]

Feb 26 01:13:05.656: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.664: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.668: INFO: Unable to read wheezy_udp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.673: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.678: INFO: Unable to read wheezy_udp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.683: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.687: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.691: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.719: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.723: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.727: INFO: Unable to read jessie_udp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.732: INFO: Unable to read jessie_tcp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.737: INFO: Unable to read jessie_udp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.741: INFO: Unable to read jessie_tcp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.746: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.750: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:05.777: INFO: Lookups using dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5397 wheezy_tcp@dns-test-service.dns-5397 wheezy_udp@dns-test-service.dns-5397.svc wheezy_tcp@dns-test-service.dns-5397.svc wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5397 jessie_tcp@dns-test-service.dns-5397 jessie_udp@dns-test-service.dns-5397.svc jessie_tcp@dns-test-service.dns-5397.svc jessie_udp@_http._tcp.dns-test-service.dns-5397.svc jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc]

Feb 26 01:13:10.653: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.661: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.666: INFO: Unable to read wheezy_udp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.671: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.676: INFO: Unable to read wheezy_udp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.681: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.685: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.689: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.713: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.716: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.719: INFO: Unable to read jessie_udp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.723: INFO: Unable to read jessie_tcp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.726: INFO: Unable to read jessie_udp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.729: INFO: Unable to read jessie_tcp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.734: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.744: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:10.789: INFO: Lookups using dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5397 wheezy_tcp@dns-test-service.dns-5397 wheezy_udp@dns-test-service.dns-5397.svc wheezy_tcp@dns-test-service.dns-5397.svc wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5397 jessie_tcp@dns-test-service.dns-5397 jessie_udp@dns-test-service.dns-5397.svc jessie_tcp@dns-test-service.dns-5397.svc jessie_udp@_http._tcp.dns-test-service.dns-5397.svc jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc]

Feb 26 01:13:15.657: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.664: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.670: INFO: Unable to read wheezy_udp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.674: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.680: INFO: Unable to read wheezy_udp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.720: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.732: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.738: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.773: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.777: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.786: INFO: Unable to read jessie_udp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.792: INFO: Unable to read jessie_tcp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.798: INFO: Unable to read jessie_udp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.808: INFO: Unable to read jessie_tcp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.815: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.826: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:15.921: INFO: Lookups using dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5397 wheezy_tcp@dns-test-service.dns-5397 wheezy_udp@dns-test-service.dns-5397.svc wheezy_tcp@dns-test-service.dns-5397.svc wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5397 jessie_tcp@dns-test-service.dns-5397 jessie_udp@dns-test-service.dns-5397.svc jessie_tcp@dns-test-service.dns-5397.svc jessie_udp@_http._tcp.dns-test-service.dns-5397.svc jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc]

Feb 26 01:13:20.652: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.656: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.659: INFO: Unable to read wheezy_udp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.662: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.664: INFO: Unable to read wheezy_udp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.668: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.672: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.675: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.694: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.697: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.699: INFO: Unable to read jessie_udp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.702: INFO: Unable to read jessie_tcp@dns-test-service.dns-5397 from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.705: INFO: Unable to read jessie_udp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.708: INFO: Unable to read jessie_tcp@dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.711: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.714: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc from pod dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027: the server could not find the requested resource (get pods dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027)
Feb 26 01:13:20.732: INFO: Lookups using dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5397 wheezy_tcp@dns-test-service.dns-5397 wheezy_udp@dns-test-service.dns-5397.svc wheezy_tcp@dns-test-service.dns-5397.svc wheezy_udp@_http._tcp.dns-test-service.dns-5397.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5397.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5397 jessie_tcp@dns-test-service.dns-5397 jessie_udp@dns-test-service.dns-5397.svc jessie_tcp@dns-test-service.dns-5397.svc jessie_udp@_http._tcp.dns-test-service.dns-5397.svc jessie_tcp@_http._tcp.dns-test-service.dns-5397.svc]

Feb 26 01:13:25.819: INFO: DNS probes using dns-5397/dns-test-394d064e-0f2b-45b1-ba4a-f5330588e027 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:13:26.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5397" for this suite.

• [SLOW TEST:43.532 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
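
The partial names resolve because dig's +search flag applies the pod's resolv.conf search list. A sketch of the expansion for a pod in namespace dns-5397, assuming typical cluster defaults:

# The short name expands along the search path, e.g.
#   dns-test-service -> dns-test-service.dns-5397.svc.cluster.local
dig +search +noall +answer dns-test-service A
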
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":280,"completed":215,"skipped":3341,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:13:26.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:13:26.577: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.787087ms)
Feb 26 01:13:26.588: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.148377ms)
Feb 26 01:13:26.595: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.528221ms)
Feb 26 01:13:26.604: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.086912ms)
Feb 26 01:13:26.617: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.381592ms)
Feb 26 01:13:26.634: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.367894ms)
Feb 26 01:13:26.643: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.06896ms)
Feb 26 01:13:26.651: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.540766ms)
Feb 26 01:13:26.664: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.804439ms)
Feb 26 01:13:26.681: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.981802ms)
Feb 26 01:13:26.691: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.678038ms)
Feb 26 01:13:26.709: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.812203ms)
Feb 26 01:13:26.729: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.635509ms)
Feb 26 01:13:26.751: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 21.572436ms)
Feb 26 01:13:26.840: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 88.607443ms)
Feb 26 01:13:26.845: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.89198ms)
Feb 26 01:13:26.861: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.411159ms)
Feb 26 01:13:26.867: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.640901ms)
Feb 26 01:13:26.873: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.89036ms)
Feb 26 01:13:26.879: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.988173ms)
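
Each numbered line above is one GET against the node's logs proxy subresource; the body is the node's log directory index, truncated by the test output, hence "alternatives.l...". A manual equivalent against the same node:

kubectl get --raw "/api/v1/nodes/jerma-node/proxy/logs/"
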
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:13:26.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4722" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":280,"completed":216,"skipped":3358,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:13:26.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:13:27.063: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-83d8d9c3-8d1c-4e87-8188-569e18b2bd1d" in namespace "security-context-test-1879" to be "success or failure"
Feb 26 01:13:27.164: INFO: Pod "busybox-readonly-false-83d8d9c3-8d1c-4e87-8188-569e18b2bd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 100.961298ms
Feb 26 01:13:29.178: INFO: Pod "busybox-readonly-false-83d8d9c3-8d1c-4e87-8188-569e18b2bd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114396022s
Feb 26 01:13:31.183: INFO: Pod "busybox-readonly-false-83d8d9c3-8d1c-4e87-8188-569e18b2bd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119637786s
Feb 26 01:13:33.195: INFO: Pod "busybox-readonly-false-83d8d9c3-8d1c-4e87-8188-569e18b2bd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131500889s
Feb 26 01:13:35.380: INFO: Pod "busybox-readonly-false-83d8d9c3-8d1c-4e87-8188-569e18b2bd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.316740335s
Feb 26 01:13:37.389: INFO: Pod "busybox-readonly-false-83d8d9c3-8d1c-4e87-8188-569e18b2bd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.325494083s
Feb 26 01:13:39.403: INFO: Pod "busybox-readonly-false-83d8d9c3-8d1c-4e87-8188-569e18b2bd1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.339586321s
Feb 26 01:13:39.403: INFO: Pod "busybox-readonly-false-83d8d9c3-8d1c-4e87-8188-569e18b2bd1d" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:13:39.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1879" for this suite.

• [SLOW TEST:12.527 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":280,"completed":217,"skipped":3359,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:13:39.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1104.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1104.svc.cluster.local;
  test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1104.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do
  test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1104.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1104.svc.cluster.local;
  test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1104.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  sleep 1; done
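
Unlike the dig probes earlier, these checks go through getent, which resolves via nsswitch (the hosts file plus DNS), so they verify that both the fully qualified pod hostname and the bare hostname resolve. One check as it runs inside the container, with $$ collapsing to $ as before:

test -n "$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2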

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 26 01:13:54.199: INFO: DNS probes using dns-1104/dns-test-f84860db-59fd-41f0-bf35-94882b058d2b succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:13:54.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1104" for this suite.

• [SLOW TEST:14.932 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":280,"completed":218,"skipped":3387,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:13:54.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 01:13:54.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-309640a6-796f-45b0-90d7-5ccf68ef69e3" in namespace "downward-api-6117" to be "success or failure"
Feb 26 01:13:54.578: INFO: Pod "downwardapi-volume-309640a6-796f-45b0-90d7-5ccf68ef69e3": Phase="Pending", Reason="", readiness=false. Elapsed: 61.536166ms
Feb 26 01:13:56.592: INFO: Pod "downwardapi-volume-309640a6-796f-45b0-90d7-5ccf68ef69e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075340923s
Feb 26 01:13:58.986: INFO: Pod "downwardapi-volume-309640a6-796f-45b0-90d7-5ccf68ef69e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469140395s
Feb 26 01:14:01.001: INFO: Pod "downwardapi-volume-309640a6-796f-45b0-90d7-5ccf68ef69e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.483756502s
Feb 26 01:14:03.008: INFO: Pod "downwardapi-volume-309640a6-796f-45b0-90d7-5ccf68ef69e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.491478976s
Feb 26 01:14:05.015: INFO: Pod "downwardapi-volume-309640a6-796f-45b0-90d7-5ccf68ef69e3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.498165253s
Feb 26 01:14:07.020: INFO: Pod "downwardapi-volume-309640a6-796f-45b0-90d7-5ccf68ef69e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.502871225s
STEP: Saw pod success
Feb 26 01:14:07.020: INFO: Pod "downwardapi-volume-309640a6-796f-45b0-90d7-5ccf68ef69e3" satisfied condition "success or failure"
Feb 26 01:14:07.022: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-309640a6-796f-45b0-90d7-5ccf68ef69e3 container client-container: 
STEP: delete the pod
Feb 26 01:14:07.556: INFO: Waiting for pod downwardapi-volume-309640a6-796f-45b0-90d7-5ccf68ef69e3 to disappear
Feb 26 01:14:07.563: INFO: Pod downwardapi-volume-309640a6-796f-45b0-90d7-5ccf68ef69e3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:14:07.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6117" for this suite.

• [SLOW TEST:13.224 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":219,"skipped":3458,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:14:07.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 26 01:14:16.836: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:14:16.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2222" for this suite.

• [SLOW TEST:9.383 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":220,"skipped":3486,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:14:16.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 26 01:14:17.232: INFO: Waiting up to 5m0s for pod "pod-8fac9e6a-587c-4a3b-a816-eb02ef958509" in namespace "emptydir-507" to be "success or failure"
Feb 26 01:14:17.255: INFO: Pod "pod-8fac9e6a-587c-4a3b-a816-eb02ef958509": Phase="Pending", Reason="", readiness=false. Elapsed: 22.587467ms
Feb 26 01:14:19.267: INFO: Pod "pod-8fac9e6a-587c-4a3b-a816-eb02ef958509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035065674s
Feb 26 01:14:21.277: INFO: Pod "pod-8fac9e6a-587c-4a3b-a816-eb02ef958509": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044869294s
Feb 26 01:14:23.285: INFO: Pod "pod-8fac9e6a-587c-4a3b-a816-eb02ef958509": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052488474s
Feb 26 01:14:25.294: INFO: Pod "pod-8fac9e6a-587c-4a3b-a816-eb02ef958509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062036377s
STEP: Saw pod success
Feb 26 01:14:25.294: INFO: Pod "pod-8fac9e6a-587c-4a3b-a816-eb02ef958509" satisfied condition "success or failure"
Feb 26 01:14:25.297: INFO: Trying to get logs from node jerma-node pod pod-8fac9e6a-587c-4a3b-a816-eb02ef958509 container test-container: 
STEP: delete the pod
Feb 26 01:14:25.439: INFO: Waiting for pod pod-8fac9e6a-587c-4a3b-a816-eb02ef958509 to disappear
Feb 26 01:14:25.447: INFO: Pod pod-8fac9e6a-587c-4a3b-a816-eb02ef958509 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:14:25.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-507" for this suite.

• [SLOW TEST:8.502 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":221,"skipped":3501,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:14:25.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-040c1307-e7bc-476a-98b9-fb67b1b737f4
STEP: Creating a pod to test consume secrets
Feb 26 01:14:25.610: INFO: Waiting up to 5m0s for pod "pod-secrets-ea0a0834-5fa7-4a61-9fde-ca0c6b109843" in namespace "secrets-8758" to be "success or failure"
Feb 26 01:14:25.635: INFO: Pod "pod-secrets-ea0a0834-5fa7-4a61-9fde-ca0c6b109843": Phase="Pending", Reason="", readiness=false. Elapsed: 25.332844ms
Feb 26 01:14:27.715: INFO: Pod "pod-secrets-ea0a0834-5fa7-4a61-9fde-ca0c6b109843": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105011532s
Feb 26 01:14:29.723: INFO: Pod "pod-secrets-ea0a0834-5fa7-4a61-9fde-ca0c6b109843": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113238963s
Feb 26 01:14:31.731: INFO: Pod "pod-secrets-ea0a0834-5fa7-4a61-9fde-ca0c6b109843": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121065081s
Feb 26 01:14:33.737: INFO: Pod "pod-secrets-ea0a0834-5fa7-4a61-9fde-ca0c6b109843": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.127475498s
STEP: Saw pod success
Feb 26 01:14:33.738: INFO: Pod "pod-secrets-ea0a0834-5fa7-4a61-9fde-ca0c6b109843" satisfied condition "success or failure"
Feb 26 01:14:33.747: INFO: Trying to get logs from node jerma-node pod pod-secrets-ea0a0834-5fa7-4a61-9fde-ca0c6b109843 container secret-volume-test: 
STEP: delete the pod
Feb 26 01:14:33.864: INFO: Waiting for pod pod-secrets-ea0a0834-5fa7-4a61-9fde-ca0c6b109843 to disappear
Feb 26 01:14:33.881: INFO: Pod pod-secrets-ea0a0834-5fa7-4a61-9fde-ca0c6b109843 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:14:33.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8758" for this suite.

• [SLOW TEST:8.462 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":222,"skipped":3506,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:14:33.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 26 01:14:34.832: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 26 01:14:36.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276474, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276474, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276474, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276474, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:14:38.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276474, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276474, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276474, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276474, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:14:40.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276474, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276474, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276474, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276474, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 26 01:14:43.901: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:14:43.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:14:45.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2960" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.781 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":280,"completed":223,"skipped":3522,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
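
The spec above stands up a conversion webhook and round-trips custom resources between two served versions. For reference, the CRD shape that drives such a test looks roughly like the sketch below; the group, names, handler path, port, and caBundle are illustrative assumptions (only the service name e2e-test-crd-conversion-webhook and the namespace crd-webhook-2960 come from the log):

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: testcrs.stable.example.com        # illustrative group/name
  spec:
    group: stable.example.com
    scope: Namespaced
    names: {plural: testcrs, singular: testcr, kind: TestCr}
    versions:
    - name: v1
      served: true
      storage: true
      schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
    - name: v2
      served: true
      storage: false
      schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
    conversion:
      strategy: Webhook
      webhook:
        conversionReviewVersions: ["v1"]
        clientConfig:
          caBundle: <base64-encoded CA>      # placeholder, from the cert set up above
          service:
            namespace: crd-webhook-2960
            name: e2e-test-crd-conversion-webhook
            path: /crdconvert                # assumed handler path
            port: 9443                       # assumed port

Listing the CRs at v1 and then at v2 (the "List CRs in v1/v2" steps) forces the API server to call the webhook for every object whose stored version differs from the requested one, which is what makes a mixed-version list a useful conversion check.
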
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:14:45.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 26 01:14:46.223: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:15:01.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-168" for this suite.

• [SLOW TEST:15.429 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":280,"completed":224,"skipped":3556,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
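
This spec relies on the fact that, with restartPolicy: Never, a failing init container permanently fails the pod and the app containers are never started. A minimal pod that reproduces the behavior (image and names are illustrative, though the real fixture also uses busybox with /bin/false) might be:

  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fails-once
  spec:
    restartPolicy: Never
    initContainers:
    - name: init1
      image: busybox:1.29
      command: ["/bin/false"]   # exits non-zero, so the pod goes to Failed
    containers:
    - name: app
      image: busybox:1.29
      command: ["/bin/true"]    # never started; init never succeeded

The ~15-second runtime above is mostly image pull plus the kubelet observing the init failure and moving the pod to Failed.
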
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:15:01.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 26 01:15:11.833: INFO: Successfully updated pod "labelsupdate2037cdc7-c704-4111-be77-3a2356170dca"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:15:13.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6210" for this suite.

• [SLOW TEST:12.773 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":225,"skipped":3606,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
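
The "update labels on modification" check works because a downwardAPI volume is refreshed by the kubelet after the pod's metadata changes, unlike environment variables, which are fixed at container start. A sketch of the pod and the label update (all names illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: labelsupdate
    labels:
      key1: value1
  spec:
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels

  kubectl label pod labelsupdate key1=value2 --overwrite

After the label change, /etc/podinfo/labels is rewritten within the kubelet's sync period, which is the update the log reports at 01:15:11.
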
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:15:13.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:15:31.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-885" for this suite.

• [SLOW TEST:17.174 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":280,"completed":226,"skipped":3625,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
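
The quota test counts secrets before creating the quota (the "Discovering how many secrets are in namespace by default" step exists because, on this 1.17-era cluster, every namespace already holds a default service-account token secret). A quota of the kind this spec creates looks like:

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: test-quota          # illustrative name
  spec:
    hard:
      secrets: "9"            # illustrative value

Creating a Secret bumps status.used.secrets by one; deleting it releases the usage again, which is exactly what the "Ensuring ..." steps poll for (the quota controller syncs asynchronously, hence the multi-second waits).
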
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:15:31.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 26 01:15:31.221: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-9190 /api/v1/namespaces/watch-9190/configmaps/e2e-watch-test-resource-version 7bf22b78-2c50-43df-ae6e-af2f4fef4565 10777380 0 2020-02-26 01:15:31 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 26 01:15:31.221: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-9190 /api/v1/namespaces/watch-9190/configmaps/e2e-watch-test-resource-version 7bf22b78-2c50-43df-ae6e-af2f4fef4565 10777381 0 2020-02-26 01:15:31 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:15:31.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9190" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":280,"completed":227,"skipped":3645,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
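
Starting the watch at the resourceVersion returned by the first update is what makes the test receive the second MODIFIED event (mutation: 2) and the DELETED event, but none of the earlier history. The same request can be issued by hand against the raw API (namespace from the log; the resourceVersion placeholder must come from the first update's response):

  kubectl get --raw '/api/v1/namespaces/watch-9190/configmaps?watch=true&resourceVersion=<rv-from-first-update>'

The API server replays every change with a resourceVersion greater than the one supplied, which matches the two events printed at 01:15:31.
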
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:15:31.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-9f57cd55-626e-4982-8919-7558953e8de6
STEP: Creating a pod to test consume secrets
Feb 26 01:15:31.511: INFO: Waiting up to 5m0s for pod "pod-secrets-b335d5f8-6fa4-4be2-93de-164cbc39cc9d" in namespace "secrets-5640" to be "success or failure"
Feb 26 01:15:31.527: INFO: Pod "pod-secrets-b335d5f8-6fa4-4be2-93de-164cbc39cc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.165245ms
Feb 26 01:15:33.759: INFO: Pod "pod-secrets-b335d5f8-6fa4-4be2-93de-164cbc39cc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247433016s
Feb 26 01:15:35.767: INFO: Pod "pod-secrets-b335d5f8-6fa4-4be2-93de-164cbc39cc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255546985s
Feb 26 01:15:37.774: INFO: Pod "pod-secrets-b335d5f8-6fa4-4be2-93de-164cbc39cc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262585072s
Feb 26 01:15:39.790: INFO: Pod "pod-secrets-b335d5f8-6fa4-4be2-93de-164cbc39cc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.278541712s
Feb 26 01:15:41.822: INFO: Pod "pod-secrets-b335d5f8-6fa4-4be2-93de-164cbc39cc9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.310555816s
STEP: Saw pod success
Feb 26 01:15:41.823: INFO: Pod "pod-secrets-b335d5f8-6fa4-4be2-93de-164cbc39cc9d" satisfied condition "success or failure"
Feb 26 01:15:41.831: INFO: Trying to get logs from node jerma-node pod pod-secrets-b335d5f8-6fa4-4be2-93de-164cbc39cc9d container secret-env-test: 
STEP: delete the pod
Feb 26 01:15:41.935: INFO: Waiting for pod pod-secrets-b335d5f8-6fa4-4be2-93de-164cbc39cc9d to disappear
Feb 26 01:15:41.944: INFO: Pod pod-secrets-b335d5f8-6fa4-4be2-93de-164cbc39cc9d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:15:41.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5640" for this suite.

• [SLOW TEST:10.718 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":280,"completed":228,"skipped":3661,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
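
Consuming a secret through env vars (as opposed to a volume) is a one-shot injection at container start. A pod equivalent to this fixture, with an assumed key name (the secret and container names are taken from the log):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-env
  spec:
    restartPolicy: Never
    containers:
    - name: secret-env-test
      image: busybox:1.29
      command: ["sh", "-c", "env"]
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:
            name: secret-test-9f57cd55-626e-4982-8919-7558953e8de6
            key: data-1                # assumed key name

The test then checks the captured container logs for the expected value, which is why it waits for the pod to reach Succeeded before fetching logs.
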
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:15:41.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:15:42.084: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 26 01:15:42.109: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 26 01:15:47.157: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 26 01:15:51.169: INFO: Creating deployment "test-rolling-update-deployment"
Feb 26 01:15:51.176: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 26 01:15:51.226: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 26 01:15:53.237: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 26 01:15:53.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276551, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276551, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276551, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276551, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:15:55.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276551, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276551, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276551, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276551, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:15:57.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276551, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276551, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276551, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276551, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:15:59.248: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 26 01:15:59.262: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-5743 /apis/apps/v1/namespaces/deployment-5743/deployments/test-rolling-update-deployment fccd7b1a-7cbd-474e-8d43-65513cff20c0 10777523 1 2020-02-26 01:15:51 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005803168  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-26 01:15:51 +0000 UTC,LastTransitionTime:2020-02-26 01:15:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-26 01:15:57 +0000 UTC,LastTransitionTime:2020-02-26 01:15:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 26 01:15:59.267: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-5743 /apis/apps/v1/namespaces/deployment-5743/replicasets/test-rolling-update-deployment-67cf4f6444 a58b58d5-6ece-4605-8b2f-f3e61a072fe7 10777513 1 2020-02-26 01:15:51 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment fccd7b1a-7cbd-474e-8d43-65513cff20c0 0xc005803607 0xc005803608}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005803678  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 26 01:15:59.267: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 26 01:15:59.267: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-5743 /apis/apps/v1/namespaces/deployment-5743/replicasets/test-rolling-update-controller 6309c6fa-c034-4688-a583-1f64333b502f 10777522 2 2020-02-26 01:15:42 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment fccd7b1a-7cbd-474e-8d43-65513cff20c0 0xc005803537 0xc005803538}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005803598  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 26 01:15:59.273: INFO: Pod "test-rolling-update-deployment-67cf4f6444-54v4b" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-54v4b test-rolling-update-deployment-67cf4f6444- deployment-5743 /api/v1/namespaces/deployment-5743/pods/test-rolling-update-deployment-67cf4f6444-54v4b 4aec0b86-28db-4488-899f-341a5ed2d696 10777512 0 2020-02-26 01:15:51 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 a58b58d5-6ece-4605-8b2f-f3e61a072fe7 0xc005803b77 0xc005803b78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d44m2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d44m2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d44m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:15:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:15:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:15:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:15:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-26 01:15:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-26 01:15:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://466d3b4a672828e2f4bc3c40d2582d4abfd1b607de0caa828b02bd4237ebe385,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:15:59.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5743" for this suite.

• [SLOW TEST:17.337 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":229,"skipped":3713,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:15:59.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:16:10.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-356" for this suite.

• [SLOW TEST:11.247 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":280,"completed":230,"skipped":3713,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
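
The ReplicationController variant is the same quota lifecycle with a different counted resource:

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: test-quota                # illustrative name
  spec:
    hard:
      replicationcontrollers: "1"   # illustrative value

Note that this counts RC objects, not their replicas; a pods quota would be needed to cap the latter.
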
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:16:10.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:16:20.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-208" for this suite.

• [SLOW TEST:10.427 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":280,"completed":231,"skipped":3753,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
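
A minimal version of the "print the output to logs" fixture is a one-shot busybox pod whose stdout is then read back with kubectl logs (names and message are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-scheduling
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox:1.29
      command: ["sh", "-c", "echo 'Hello from the busybox pod'"]

  kubectl logs busybox-scheduling

The container runtime captures stdout/stderr, so kubectl logs must return the echoed string once the pod has run to completion.
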
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:16:20.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 26 01:16:30.351: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:16:30.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-344" for this suite.

• [SLOW TEST:9.491 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":232,"skipped":3794,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
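
The FallbackToLogsOnError policy only kicks in when a container fails and has written nothing to its termination-message file; the kubelet then copies the tail of the container log into the status. A pod matching the observed "DONE" message (image and names are assumptions) is:

  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-fallback
  spec:
    restartPolicy: Never
    containers:
    - name: app
      image: busybox:1.29
      command: ["sh", "-c", "echo -n DONE; exit 1"]   # fail, with log output
      terminationMessagePolicy: FallbackToLogsOnError

After the pod fails, .status.containerStatuses[0].state.terminated.message carries "DONE", which is the comparison logged at 01:16:30.
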
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:16:30.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-5d6fdab8-20bb-465c-8a99-abd2a2e2ee12
STEP: Creating a pod to test consume secrets
Feb 26 01:16:30.749: INFO: Waiting up to 5m0s for pod "pod-secrets-71405559-3927-42bf-894d-3a3675a82979" in namespace "secrets-8722" to be "success or failure"
Feb 26 01:16:30.835: INFO: Pod "pod-secrets-71405559-3927-42bf-894d-3a3675a82979": Phase="Pending", Reason="", readiness=false. Elapsed: 85.703393ms
Feb 26 01:16:32.844: INFO: Pod "pod-secrets-71405559-3927-42bf-894d-3a3675a82979": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094126153s
Feb 26 01:16:34.859: INFO: Pod "pod-secrets-71405559-3927-42bf-894d-3a3675a82979": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109841654s
Feb 26 01:16:36.874: INFO: Pod "pod-secrets-71405559-3927-42bf-894d-3a3675a82979": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12416672s
Feb 26 01:16:38.879: INFO: Pod "pod-secrets-71405559-3927-42bf-894d-3a3675a82979": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12965711s
Feb 26 01:16:40.889: INFO: Pod "pod-secrets-71405559-3927-42bf-894d-3a3675a82979": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139124432s
STEP: Saw pod success
Feb 26 01:16:40.889: INFO: Pod "pod-secrets-71405559-3927-42bf-894d-3a3675a82979" satisfied condition "success or failure"
Feb 26 01:16:40.893: INFO: Trying to get logs from node jerma-node pod pod-secrets-71405559-3927-42bf-894d-3a3675a82979 container secret-volume-test: 
STEP: delete the pod
Feb 26 01:16:40.948: INFO: Waiting for pod pod-secrets-71405559-3927-42bf-894d-3a3675a82979 to disappear
Feb 26 01:16:40.953: INFO: Pod pod-secrets-71405559-3927-42bf-894d-3a3675a82979 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:16:40.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8722" for this suite.

• [SLOW TEST:10.504 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":233,"skipped":3826,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
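
defaultMode applies to every file projected from the secret; the fixture mounts the volume and checks both content and file mode. An equivalent pod, using the secret name from the log with an assumed key and mode:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-defaultmode
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-5d6fdab8-20bb-465c-8a99-abd2a2e2ee12
        defaultMode: 0400          # assumed mode; shows up as -r--------

defaultMode is an integer field: 0400 is octal in YAML, and JSON clients must send the decimal equivalent, 256.
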
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:16:40.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-54f58feb-4f0a-43e1-9e5c-ca77bff94863
STEP: Creating a pod to test consume secrets
Feb 26 01:16:43.434: INFO: Waiting up to 5m0s for pod "pod-secrets-390e4106-a8b8-4267-80ea-fa226b89ad98" in namespace "secrets-4120" to be "success or failure"
Feb 26 01:16:43.447: INFO: Pod "pod-secrets-390e4106-a8b8-4267-80ea-fa226b89ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 13.639866ms
Feb 26 01:16:45.463: INFO: Pod "pod-secrets-390e4106-a8b8-4267-80ea-fa226b89ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029119083s
Feb 26 01:16:47.471: INFO: Pod "pod-secrets-390e4106-a8b8-4267-80ea-fa226b89ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036850378s
Feb 26 01:16:49.479: INFO: Pod "pod-secrets-390e4106-a8b8-4267-80ea-fa226b89ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044715401s
Feb 26 01:16:51.485: INFO: Pod "pod-secrets-390e4106-a8b8-4267-80ea-fa226b89ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051103617s
Feb 26 01:16:53.493: INFO: Pod "pod-secrets-390e4106-a8b8-4267-80ea-fa226b89ad98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059149261s
STEP: Saw pod success
Feb 26 01:16:53.493: INFO: Pod "pod-secrets-390e4106-a8b8-4267-80ea-fa226b89ad98" satisfied condition "success or failure"
Feb 26 01:16:53.498: INFO: Trying to get logs from node jerma-node pod pod-secrets-390e4106-a8b8-4267-80ea-fa226b89ad98 container secret-volume-test: 
STEP: delete the pod
Feb 26 01:16:53.549: INFO: Waiting for pod pod-secrets-390e4106-a8b8-4267-80ea-fa226b89ad98 to disappear
Feb 26 01:16:53.565: INFO: Pod pod-secrets-390e4106-a8b8-4267-80ea-fa226b89ad98 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:16:53.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4120" for this suite.
STEP: Destroying namespace "secret-namespace-9257" for this suite.

• [SLOW TEST:12.621 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":280,"completed":234,"skipped":3828,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
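
The point of this spec is namespace isolation: a volume's secretName is always resolved in the pod's own namespace, so an identically named secret elsewhere cannot leak in. The setup reduces to something like (secret name and values illustrative; the two namespaces are from the log):

  kubectl create secret generic shared-name --from-literal=data-1=decoy-value -n secret-namespace-9257
  kubectl create secret generic shared-name --from-literal=data-1=real-value -n secrets-4120
  # a pod in secrets-4120 that mounts "shared-name" must see real-value

The second "Destroying namespace" line above tears down the extra namespace created just to host the decoy secret.
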
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:16:53.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:17:05.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3662" for this suite.

• [SLOW TEST:11.731 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":280,"completed":235,"skipped":3833,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
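
Services get the same quota treatment, with the wrinkle that quotas can also cap specific service types:

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: test-quota               # illustrative name
  spec:
    hard:
      services: "1"                # illustrative value
      services.loadbalancers: "1"  # type-scoped counts are also supported

As with the secret and replication-controller variants, the pass condition is status.used rising on create and returning to zero after the delete.
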
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:17:05.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3503
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-3503
I0226 01:17:05.550383       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3503, replica count: 2
I0226 01:17:08.601800       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 01:17:11.602382       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 01:17:14.602901       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 01:17:17.603521       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 01:17:20.604273       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 26 01:17:20.604: INFO: Creating new exec pod
Feb 26 01:17:29.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3503 execpodftrmn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 26 01:17:32.750: INFO: stderr: "I0226 01:17:32.513614    3966 log.go:172] (0xc000551340) (0xc0006cfea0) Create stream\nI0226 01:17:32.513768    3966 log.go:172] (0xc000551340) (0xc0006cfea0) Stream added, broadcasting: 1\nI0226 01:17:32.521117    3966 log.go:172] (0xc000551340) Reply frame received for 1\nI0226 01:17:32.521290    3966 log.go:172] (0xc000551340) (0xc0006cff40) Create stream\nI0226 01:17:32.521319    3966 log.go:172] (0xc000551340) (0xc0006cff40) Stream added, broadcasting: 3\nI0226 01:17:32.523244    3966 log.go:172] (0xc000551340) Reply frame received for 3\nI0226 01:17:32.523306    3966 log.go:172] (0xc000551340) (0xc00057a780) Create stream\nI0226 01:17:32.523316    3966 log.go:172] (0xc000551340) (0xc00057a780) Stream added, broadcasting: 5\nI0226 01:17:32.527043    3966 log.go:172] (0xc000551340) Reply frame received for 5\nI0226 01:17:32.649010    3966 log.go:172] (0xc000551340) Data frame received for 5\nI0226 01:17:32.649079    3966 log.go:172] (0xc00057a780) (5) Data frame handling\nI0226 01:17:32.649113    3966 log.go:172] (0xc00057a780) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0226 01:17:32.655712    3966 log.go:172] (0xc000551340) Data frame received for 5\nI0226 01:17:32.655725    3966 log.go:172] (0xc00057a780) (5) Data frame handling\nI0226 01:17:32.655737    3966 log.go:172] (0xc00057a780) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0226 01:17:32.733798    3966 log.go:172] (0xc000551340) Data frame received for 1\nI0226 01:17:32.733934    3966 log.go:172] (0xc000551340) (0xc0006cff40) Stream removed, broadcasting: 3\nI0226 01:17:32.734049    3966 log.go:172] (0xc0006cfea0) (1) Data frame handling\nI0226 01:17:32.734085    3966 log.go:172] (0xc0006cfea0) (1) Data frame sent\nI0226 01:17:32.734200    3966 log.go:172] (0xc000551340) (0xc00057a780) Stream removed, broadcasting: 5\nI0226 01:17:32.734284    3966 log.go:172] (0xc000551340) (0xc0006cfea0) Stream removed, broadcasting: 1\nI0226 01:17:32.734330    3966 log.go:172] (0xc000551340) Go away received\nI0226 01:17:32.736378    3966 log.go:172] (0xc000551340) (0xc0006cfea0) Stream removed, broadcasting: 1\nI0226 01:17:32.736402    3966 log.go:172] (0xc000551340) (0xc0006cff40) Stream removed, broadcasting: 3\nI0226 01:17:32.736415    3966 log.go:172] (0xc000551340) (0xc00057a780) Stream removed, broadcasting: 5\n"
Feb 26 01:17:32.751: INFO: stdout: ""
Feb 26 01:17:32.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3503 execpodftrmn -- /bin/sh -x -c nc -zv -t -w 2 10.96.157.253 80'
Feb 26 01:17:33.076: INFO: stderr: "I0226 01:17:32.878914    3988 log.go:172] (0xc000a09810) (0xc0008d08c0) Create stream\nI0226 01:17:32.879020    3988 log.go:172] (0xc000a09810) (0xc0008d08c0) Stream added, broadcasting: 1\nI0226 01:17:32.888273    3988 log.go:172] (0xc000a09810) Reply frame received for 1\nI0226 01:17:32.888356    3988 log.go:172] (0xc000a09810) (0xc0005e2780) Create stream\nI0226 01:17:32.888373    3988 log.go:172] (0xc000a09810) (0xc0005e2780) Stream added, broadcasting: 3\nI0226 01:17:32.890052    3988 log.go:172] (0xc000a09810) Reply frame received for 3\nI0226 01:17:32.890075    3988 log.go:172] (0xc000a09810) (0xc0003c9400) Create stream\nI0226 01:17:32.890080    3988 log.go:172] (0xc000a09810) (0xc0003c9400) Stream added, broadcasting: 5\nI0226 01:17:32.891283    3988 log.go:172] (0xc000a09810) Reply frame received for 5\nI0226 01:17:32.974636    3988 log.go:172] (0xc000a09810) Data frame received for 5\nI0226 01:17:32.975186    3988 log.go:172] (0xc0003c9400) (5) Data frame handling\nI0226 01:17:32.975368    3988 log.go:172] (0xc0003c9400) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.157.253 80\nConnection to 10.96.157.253 80 port [tcp/http] succeeded!\nI0226 01:17:33.058209    3988 log.go:172] (0xc000a09810) Data frame received for 1\nI0226 01:17:33.058374    3988 log.go:172] (0xc0008d08c0) (1) Data frame handling\nI0226 01:17:33.058434    3988 log.go:172] (0xc0008d08c0) (1) Data frame sent\nI0226 01:17:33.059360    3988 log.go:172] (0xc000a09810) (0xc0003c9400) Stream removed, broadcasting: 5\nI0226 01:17:33.059505    3988 log.go:172] (0xc000a09810) (0xc0005e2780) Stream removed, broadcasting: 3\nI0226 01:17:33.059574    3988 log.go:172] (0xc000a09810) (0xc0008d08c0) Stream removed, broadcasting: 1\nI0226 01:17:33.059614    3988 log.go:172] (0xc000a09810) Go away received\nI0226 01:17:33.060868    3988 log.go:172] (0xc000a09810) (0xc0008d08c0) Stream removed, broadcasting: 1\nI0226 01:17:33.060881    3988 log.go:172] (0xc000a09810) (0xc0005e2780) Stream removed, broadcasting: 3\nI0226 01:17:33.060895    3988 log.go:172] (0xc000a09810) (0xc0003c9400) Stream removed, broadcasting: 5\n"
Feb 26 01:17:33.076: INFO: stdout: ""
Feb 26 01:17:33.076: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:17:33.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3503" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:27.929 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":280,"completed":236,"skipped":3841,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
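
An ExternalName service is essentially a DNS CNAME, so converting it to ClusterIP means replacing externalName with ports and a selector; the endpoints then come from the replication controller the test creates. A sketch of the before/after (the external hostname, selector, and port mapping are assumptions; the service name, namespace, and nc probe are from the log):

  # before
  apiVersion: v1
  kind: Service
  metadata:
    name: externalname-service
  spec:
    type: ExternalName
    externalName: clusterip.example.com

  # after (applied over the same object)
  apiVersion: v1
  kind: Service
  metadata:
    name: externalname-service
  spec:
    type: ClusterIP
    selector:
      name: externalname-service
    ports:
    - port: 80
      targetPort: 80

  kubectl exec execpodftrmn -n services-3503 -- /bin/sh -c 'nc -zv -t -w 2 externalname-service 80'

Both probes in the log succeed: one against the service's DNS name, one against the allocated ClusterIP 10.96.157.253.
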
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:17:33.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:17:33.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 26 01:17:36.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7085 create -f -'
Feb 26 01:17:40.685: INFO: stderr: ""
Feb 26 01:17:40.685: INFO: stdout: "e2e-test-crd-publish-openapi-1834-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 26 01:17:40.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7085 delete e2e-test-crd-publish-openapi-1834-crds test-cr'
Feb 26 01:17:40.906: INFO: stderr: ""
Feb 26 01:17:40.906: INFO: stdout: "e2e-test-crd-publish-openapi-1834-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Feb 26 01:17:40.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7085 apply -f -'
Feb 26 01:17:41.299: INFO: stderr: ""
Feb 26 01:17:41.300: INFO: stdout: "e2e-test-crd-publish-openapi-1834-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 26 01:17:41.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7085 delete e2e-test-crd-publish-openapi-1834-crds test-cr'
Feb 26 01:17:41.450: INFO: stderr: ""
Feb 26 01:17:41.451: INFO: stdout: "e2e-test-crd-publish-openapi-1834-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Feb 26 01:17:41.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1834-crds'
Feb 26 01:17:41.818: INFO: stderr: ""
Feb 26 01:17:41.818: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1834-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:17:44.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7085" for this suite.

• [SLOW TEST:11.828 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":280,"completed":237,"skipped":3853,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:17:45.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 26 01:17:45.363: INFO: Waiting up to 5m0s for pod "pod-070b96cd-d645-4f77-b89f-bd6b18f373e0" in namespace "emptydir-3454" to be "success or failure"
Feb 26 01:17:45.482: INFO: Pod "pod-070b96cd-d645-4f77-b89f-bd6b18f373e0": Phase="Pending", Reason="", readiness=false. Elapsed: 119.108813ms
Feb 26 01:17:47.904: INFO: Pod "pod-070b96cd-d645-4f77-b89f-bd6b18f373e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.540930955s
Feb 26 01:17:49.946: INFO: Pod "pod-070b96cd-d645-4f77-b89f-bd6b18f373e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.582861381s
Feb 26 01:17:51.953: INFO: Pod "pod-070b96cd-d645-4f77-b89f-bd6b18f373e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.590155189s
Feb 26 01:17:53.971: INFO: Pod "pod-070b96cd-d645-4f77-b89f-bd6b18f373e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.607796199s
Feb 26 01:17:55.976: INFO: Pod "pod-070b96cd-d645-4f77-b89f-bd6b18f373e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.613108967s
STEP: Saw pod success
Feb 26 01:17:55.976: INFO: Pod "pod-070b96cd-d645-4f77-b89f-bd6b18f373e0" satisfied condition "success or failure"
Feb 26 01:17:55.980: INFO: Trying to get logs from node jerma-node pod pod-070b96cd-d645-4f77-b89f-bd6b18f373e0 container test-container: 
STEP: delete the pod
Feb 26 01:17:56.330: INFO: Waiting for pod pod-070b96cd-d645-4f77-b89f-bd6b18f373e0 to disappear
Feb 26 01:17:56.340: INFO: Pod pod-070b96cd-d645-4f77-b89f-bd6b18f373e0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:17:56.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3454" for this suite.

• [SLOW TEST:11.276 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":238,"skipped":3853,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:17:56.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 26 01:18:03.857: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:18:03.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5957" for this suite.

• [SLOW TEST:7.608 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":280,"completed":239,"skipped":3866,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:18:03.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-6tx5
STEP: Creating a pod to test atomic-volume-subpath
Feb 26 01:18:04.151: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6tx5" in namespace "subpath-471" to be "success or failure"
Feb 26 01:18:04.252: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Pending", Reason="", readiness=false. Elapsed: 100.574577ms
Feb 26 01:18:06.258: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1067485s
Feb 26 01:18:08.265: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113685993s
Feb 26 01:18:10.280: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129243632s
Feb 26 01:18:12.300: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Running", Reason="", readiness=true. Elapsed: 8.149261214s
Feb 26 01:18:14.307: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Running", Reason="", readiness=true. Elapsed: 10.156083919s
Feb 26 01:18:16.315: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Running", Reason="", readiness=true. Elapsed: 12.163993101s
Feb 26 01:18:18.322: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Running", Reason="", readiness=true. Elapsed: 14.170799049s
Feb 26 01:18:20.330: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Running", Reason="", readiness=true. Elapsed: 16.179255055s
Feb 26 01:18:22.338: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Running", Reason="", readiness=true. Elapsed: 18.187409101s
Feb 26 01:18:24.344: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Running", Reason="", readiness=true. Elapsed: 20.193413944s
Feb 26 01:18:26.355: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Running", Reason="", readiness=true. Elapsed: 22.204387611s
Feb 26 01:18:28.364: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Running", Reason="", readiness=true. Elapsed: 24.213299916s
Feb 26 01:18:30.371: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Running", Reason="", readiness=true. Elapsed: 26.219477705s
Feb 26 01:18:32.375: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Running", Reason="", readiness=true. Elapsed: 28.224113397s
Feb 26 01:18:34.382: INFO: Pod "pod-subpath-test-configmap-6tx5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.231239714s
STEP: Saw pod success
Feb 26 01:18:34.383: INFO: Pod "pod-subpath-test-configmap-6tx5" satisfied condition "success or failure"
Feb 26 01:18:34.386: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-6tx5 container test-container-subpath-configmap-6tx5: 
STEP: delete the pod
Feb 26 01:18:34.466: INFO: Waiting for pod pod-subpath-test-configmap-6tx5 to disappear
Feb 26 01:18:34.471: INFO: Pod pod-subpath-test-configmap-6tx5 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-6tx5
Feb 26 01:18:34.472: INFO: Deleting pod "pod-subpath-test-configmap-6tx5" in namespace "subpath-471"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:18:34.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-471" for this suite.

• [SLOW TEST:30.525 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":280,"completed":240,"skipped":3870,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:18:34.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 26 01:18:42.742: INFO: 10 pods remaining
Feb 26 01:18:42.742: INFO: 10 pods have nil DeletionTimestamp
Feb 26 01:18:42.742: INFO: 
Feb 26 01:18:43.294: INFO: 0 pods remaining
Feb 26 01:18:43.295: INFO: 0 pods have nil DeletionTimestamp
Feb 26 01:18:43.295: INFO: 
STEP: Gathering metrics
W0226 01:18:44.332425       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 01:18:44.332: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:18:44.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-802" for this suite.

• [SLOW TEST:9.877 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":280,"completed":241,"skipped":3870,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:18:44.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
Feb 26 01:18:46.006: INFO: created pod pod-service-account-defaultsa
Feb 26 01:18:46.006: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 26 01:18:46.087: INFO: created pod pod-service-account-mountsa
Feb 26 01:18:46.088: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 26 01:18:46.675: INFO: created pod pod-service-account-nomountsa
Feb 26 01:18:46.675: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 26 01:18:46.687: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 26 01:18:46.687: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 26 01:18:48.690: INFO: created pod pod-service-account-mountsa-mountspec
Feb 26 01:18:48.690: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 26 01:18:49.191: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 26 01:18:49.191: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 26 01:18:49.282: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 26 01:18:49.282: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 26 01:18:49.447: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 26 01:18:49.447: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 26 01:18:49.763: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 26 01:18:49.763: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:18:49.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3643" for this suite.

• [SLOW TEST:8.282 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":280,"completed":242,"skipped":3874,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:18:52.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-129359da-48d8-4e2a-ad47-32f91a18c3aa
STEP: Creating secret with name s-test-opt-upd-ed6bac7a-9dee-4882-b1de-5d77489e77b2
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-129359da-48d8-4e2a-ad47-32f91a18c3aa
STEP: Updating secret s-test-opt-upd-ed6bac7a-9dee-4882-b1de-5d77489e77b2
STEP: Creating secret with name s-test-opt-create-1ca0b29c-d41e-4614-b084-2257af722be6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:21:11.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6365" for this suite.

• [SLOW TEST:138.553 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":243,"skipped":3877,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:21:11.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 26 01:21:21.842: INFO: Successfully updated pod "labelsupdate1061a54e-7145-4675-bd1d-fb1055d461d9"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:21:23.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2230" for this suite.

• [SLOW TEST:12.765 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":244,"skipped":3882,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:21:23.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 26 01:21:24.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-1575'
Feb 26 01:21:24.257: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 26 01:21:24.257: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740
Feb 26 01:21:26.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1575'
Feb 26 01:21:26.687: INFO: stderr: ""
Feb 26 01:21:26.687: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:21:26.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1575" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":280,"completed":245,"skipped":3895,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:21:26.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:21:26.981: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 26 01:21:27.069: INFO: Number of nodes with available pods: 0
Feb 26 01:21:27.069: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:28.085: INFO: Number of nodes with available pods: 0
Feb 26 01:21:28.085: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:29.835: INFO: Number of nodes with available pods: 0
Feb 26 01:21:29.835: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:30.622: INFO: Number of nodes with available pods: 0
Feb 26 01:21:30.623: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:31.698: INFO: Number of nodes with available pods: 0
Feb 26 01:21:31.698: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:32.344: INFO: Number of nodes with available pods: 0
Feb 26 01:21:32.344: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:33.086: INFO: Number of nodes with available pods: 0
Feb 26 01:21:33.086: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:36.588: INFO: Number of nodes with available pods: 0
Feb 26 01:21:36.588: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:37.456: INFO: Number of nodes with available pods: 0
Feb 26 01:21:37.456: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:38.081: INFO: Number of nodes with available pods: 0
Feb 26 01:21:38.081: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:39.177: INFO: Number of nodes with available pods: 0
Feb 26 01:21:39.177: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:40.082: INFO: Number of nodes with available pods: 0
Feb 26 01:21:40.082: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:41.078: INFO: Number of nodes with available pods: 1
Feb 26 01:21:41.078: INFO: Node jerma-node is running more than one daemon pod
Feb 26 01:21:42.093: INFO: Number of nodes with available pods: 2
Feb 26 01:21:42.093: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 26 01:21:42.225: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:42.225: INFO: Wrong image for pod: daemon-set-zfs4k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:43.252: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:43.252: INFO: Wrong image for pod: daemon-set-zfs4k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:44.241: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:44.241: INFO: Wrong image for pod: daemon-set-zfs4k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:45.276: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:45.276: INFO: Wrong image for pod: daemon-set-zfs4k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:46.242: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:46.242: INFO: Wrong image for pod: daemon-set-zfs4k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:47.245: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:47.245: INFO: Wrong image for pod: daemon-set-zfs4k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:48.244: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:48.244: INFO: Wrong image for pod: daemon-set-zfs4k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:49.242: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:49.243: INFO: Wrong image for pod: daemon-set-zfs4k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:49.243: INFO: Pod daemon-set-zfs4k is not available
Feb 26 01:21:50.245: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:50.245: INFO: Wrong image for pod: daemon-set-zfs4k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:50.245: INFO: Pod daemon-set-zfs4k is not available
Feb 26 01:21:51.243: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:51.243: INFO: Wrong image for pod: daemon-set-zfs4k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:51.243: INFO: Pod daemon-set-zfs4k is not available
Feb 26 01:21:52.240: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:52.240: INFO: Wrong image for pod: daemon-set-zfs4k. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:52.240: INFO: Pod daemon-set-zfs4k is not available
Feb 26 01:21:53.242: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:53.242: INFO: Pod daemon-set-fx7cq is not available
Feb 26 01:21:54.242: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:54.242: INFO: Pod daemon-set-fx7cq is not available
Feb 26 01:21:55.247: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:55.248: INFO: Pod daemon-set-fx7cq is not available
Feb 26 01:21:56.245: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:56.245: INFO: Pod daemon-set-fx7cq is not available
Feb 26 01:21:57.241: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:57.241: INFO: Pod daemon-set-fx7cq is not available
Feb 26 01:21:58.245: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:58.245: INFO: Pod daemon-set-fx7cq is not available
Feb 26 01:21:59.252: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:21:59.252: INFO: Pod daemon-set-fx7cq is not available
Feb 26 01:22:00.247: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:01.241: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:02.250: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:03.244: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:04.245: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:04.245: INFO: Pod daemon-set-6lcbs is not available
Feb 26 01:22:05.243: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:05.243: INFO: Pod daemon-set-6lcbs is not available
Feb 26 01:22:06.245: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:06.246: INFO: Pod daemon-set-6lcbs is not available
Feb 26 01:22:07.242: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:07.242: INFO: Pod daemon-set-6lcbs is not available
Feb 26 01:22:08.246: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:08.246: INFO: Pod daemon-set-6lcbs is not available
Feb 26 01:22:09.244: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:09.244: INFO: Pod daemon-set-6lcbs is not available
Feb 26 01:22:10.246: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:10.246: INFO: Pod daemon-set-6lcbs is not available
Feb 26 01:22:11.243: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:11.243: INFO: Pod daemon-set-6lcbs is not available
Feb 26 01:22:12.243: INFO: Wrong image for pod: daemon-set-6lcbs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 26 01:22:12.243: INFO: Pod daemon-set-6lcbs is not available
Feb 26 01:22:13.249: INFO: Pod daemon-set-w2hlj is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 26 01:22:13.269: INFO: Number of nodes with available pods: 1
Feb 26 01:22:13.269: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:22:16.167: INFO: Number of nodes with available pods: 1
Feb 26 01:22:16.167: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:22:16.494: INFO: Number of nodes with available pods: 1
Feb 26 01:22:16.494: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:22:17.299: INFO: Number of nodes with available pods: 1
Feb 26 01:22:17.299: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:22:18.278: INFO: Number of nodes with available pods: 1
Feb 26 01:22:18.278: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:22:20.171: INFO: Number of nodes with available pods: 1
Feb 26 01:22:20.172: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:22:20.886: INFO: Number of nodes with available pods: 1
Feb 26 01:22:20.886: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:22:21.281: INFO: Number of nodes with available pods: 1
Feb 26 01:22:21.282: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:22:22.282: INFO: Number of nodes with available pods: 1
Feb 26 01:22:22.282: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 26 01:22:23.297: INFO: Number of nodes with available pods: 2
Feb 26 01:22:23.297: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7585, will wait for the garbage collector to delete the pods
Feb 26 01:22:23.395: INFO: Deleting DaemonSet.extensions daemon-set took: 17.520273ms
Feb 26 01:22:23.696: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.469056ms
Feb 26 01:22:33.234: INFO: Number of nodes with available pods: 0
Feb 26 01:22:33.235: INFO: Number of running nodes: 0, number of available pods: 0
Feb 26 01:22:33.238: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7585/daemonsets","resourceVersion":"10779063"},"items":null}

Feb 26 01:22:33.242: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7585/pods","resourceVersion":"10779063"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:22:33.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7585" for this suite.

• [SLOW TEST:66.501 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":280,"completed":246,"skipped":3937,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:22:33.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 26 01:22:34.008: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 26 01:22:36.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:22:38.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:22:40.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:22:42.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718276954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 26 01:22:45.062: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
Feb 26 01:22:45.124: INFO: Waiting for webhook configuration to be ready...
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:22:45.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7902" for this suite.
STEP: Destroying namespace "webhook-7902-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.222 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":280,"completed":247,"skipped":3946,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:22:45.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 26 01:22:45.572: INFO: Waiting up to 5m0s for pod "pod-77321e73-653b-4e94-a1ce-9710dd7670d4" in namespace "emptydir-9708" to be "success or failure"
Feb 26 01:22:45.577: INFO: Pod "pod-77321e73-653b-4e94-a1ce-9710dd7670d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206492ms
Feb 26 01:22:47.585: INFO: Pod "pod-77321e73-653b-4e94-a1ce-9710dd7670d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012030745s
Feb 26 01:22:49.597: INFO: Pod "pod-77321e73-653b-4e94-a1ce-9710dd7670d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024629405s
Feb 26 01:22:51.637: INFO: Pod "pod-77321e73-653b-4e94-a1ce-9710dd7670d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064082695s
Feb 26 01:22:53.646: INFO: Pod "pod-77321e73-653b-4e94-a1ce-9710dd7670d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073776799s
Feb 26 01:22:55.655: INFO: Pod "pod-77321e73-653b-4e94-a1ce-9710dd7670d4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082835547s
Feb 26 01:22:57.660: INFO: Pod "pod-77321e73-653b-4e94-a1ce-9710dd7670d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.088020608s
STEP: Saw pod success
Feb 26 01:22:57.661: INFO: Pod "pod-77321e73-653b-4e94-a1ce-9710dd7670d4" satisfied condition "success or failure"
Feb 26 01:22:57.665: INFO: Trying to get logs from node jerma-node pod pod-77321e73-653b-4e94-a1ce-9710dd7670d4 container test-container: 
STEP: delete the pod
Feb 26 01:22:57.751: INFO: Waiting for pod pod-77321e73-653b-4e94-a1ce-9710dd7670d4 to disappear
Feb 26 01:22:57.756: INFO: Pod pod-77321e73-653b-4e94-a1ce-9710dd7670d4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:22:57.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9708" for this suite.

• [SLOW TEST:12.367 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":248,"skipped":3975,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:22:57.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4243
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4243
STEP: Creating statefulset with conflicting port in namespace statefulset-4243
STEP: Waiting until pod test-pod will start running in namespace statefulset-4243
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-4243
Feb 26 01:23:12.281: INFO: Observed stateful pod in namespace: statefulset-4243, name: ss-0, uid: e05814e8-b778-412e-955b-23997aede184, status phase: Pending. Waiting for statefulset controller to delete.
Feb 26 01:23:12.311: INFO: Observed stateful pod in namespace: statefulset-4243, name: ss-0, uid: e05814e8-b778-412e-955b-23997aede184, status phase: Failed. Waiting for statefulset controller to delete.
Feb 26 01:23:12.343: INFO: Observed stateful pod in namespace: statefulset-4243, name: ss-0, uid: e05814e8-b778-412e-955b-23997aede184, status phase: Failed. Waiting for statefulset controller to delete.
Feb 26 01:23:12.353: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4243
STEP: Removing pod with conflicting port in namespace statefulset-4243
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4243 and is in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 26 01:23:22.532: INFO: Deleting all statefulset in ns statefulset-4243
Feb 26 01:23:22.537: INFO: Scaling statefulset ss to 0
Feb 26 01:23:32.580: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 01:23:32.584: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:23:32.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4243" for this suite.

• [SLOW TEST:34.756 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":280,"completed":249,"skipped":3983,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:23:32.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Feb 26 01:23:32.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Feb 26 01:23:46.039: INFO: >>> kubeConfig: /root/.kube/config
Feb 26 01:23:49.045: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:24:01.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1929" for this suite.

• [SLOW TEST:28.540 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":280,"completed":250,"skipped":3984,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:24:01.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-4ecbf220-96fa-4d8d-9e31-effd1122e80d
STEP: Creating a pod to test consume configMaps
Feb 26 01:24:01.289: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4b5c7a27-a9a9-467e-83eb-04b39f230a24" in namespace "projected-8184" to be "success or failure"
Feb 26 01:24:01.320: INFO: Pod "pod-projected-configmaps-4b5c7a27-a9a9-467e-83eb-04b39f230a24": Phase="Pending", Reason="", readiness=false. Elapsed: 31.098135ms
Feb 26 01:24:03.335: INFO: Pod "pod-projected-configmaps-4b5c7a27-a9a9-467e-83eb-04b39f230a24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046233351s
Feb 26 01:24:05.344: INFO: Pod "pod-projected-configmaps-4b5c7a27-a9a9-467e-83eb-04b39f230a24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055167431s
Feb 26 01:24:07.352: INFO: Pod "pod-projected-configmaps-4b5c7a27-a9a9-467e-83eb-04b39f230a24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063028296s
Feb 26 01:24:09.361: INFO: Pod "pod-projected-configmaps-4b5c7a27-a9a9-467e-83eb-04b39f230a24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072053332s
STEP: Saw pod success
Feb 26 01:24:09.362: INFO: Pod "pod-projected-configmaps-4b5c7a27-a9a9-467e-83eb-04b39f230a24" satisfied condition "success or failure"
Feb 26 01:24:09.376: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-4b5c7a27-a9a9-467e-83eb-04b39f230a24 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 26 01:24:09.858: INFO: Waiting for pod pod-projected-configmaps-4b5c7a27-a9a9-467e-83eb-04b39f230a24 to disappear
Feb 26 01:24:09.870: INFO: Pod pod-projected-configmaps-4b5c7a27-a9a9-467e-83eb-04b39f230a24 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:24:09.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8184" for this suite.

• [SLOW TEST:8.776 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":251,"skipped":3984,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:24:09.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-95435473-47ac-427a-bbff-8b16aee1b19b
STEP: Creating configMap with name cm-test-opt-upd-8600c7a2-3b1a-4ff2-a23e-ef1ae4022488
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-95435473-47ac-427a-bbff-8b16aee1b19b
STEP: Updating configmap cm-test-opt-upd-8600c7a2-3b1a-4ff2-a23e-ef1ae4022488
STEP: Creating configMap with name cm-test-opt-create-875a2020-ee7a-4c49-b4b5-fbff0171add5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:25:54.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-183" for this suite.

• [SLOW TEST:104.930 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":252,"skipped":3991,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:25:54.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:25:54.974: INFO: Creating ReplicaSet my-hostname-basic-270f7b6d-f9a0-4ab8-933a-e3e8857641ac
Feb 26 01:25:54.989: INFO: Pod name my-hostname-basic-270f7b6d-f9a0-4ab8-933a-e3e8857641ac: Found 0 pods out of 1
Feb 26 01:25:59.996: INFO: Pod name my-hostname-basic-270f7b6d-f9a0-4ab8-933a-e3e8857641ac: Found 1 pod out of 1
Feb 26 01:25:59.997: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-270f7b6d-f9a0-4ab8-933a-e3e8857641ac" is running
Feb 26 01:26:06.124: INFO: Pod "my-hostname-basic-270f7b6d-f9a0-4ab8-933a-e3e8857641ac-fg5vt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 01:25:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 01:25:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-270f7b6d-f9a0-4ab8-933a-e3e8857641ac]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 01:25:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-270f7b6d-f9a0-4ab8-933a-e3e8857641ac]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 01:25:55 +0000 UTC Reason: Message:}])
Feb 26 01:26:06.124: INFO: Trying to dial the pod
Feb 26 01:26:11.144: INFO: Controller my-hostname-basic-270f7b6d-f9a0-4ab8-933a-e3e8857641ac: Got expected result from replica 1 [my-hostname-basic-270f7b6d-f9a0-4ab8-933a-e3e8857641ac-fg5vt]: "my-hostname-basic-270f7b6d-f9a0-4ab8-933a-e3e8857641ac-fg5vt", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:26:11.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5662" for this suite.

• [SLOW TEST:16.295 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":253,"skipped":3996,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:26:11.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 01:26:11.321: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c0e4b7b-3d7b-469c-810e-e841e6380e82" in namespace "downward-api-9692" to be "success or failure"
Feb 26 01:26:11.355: INFO: Pod "downwardapi-volume-1c0e4b7b-3d7b-469c-810e-e841e6380e82": Phase="Pending", Reason="", readiness=false. Elapsed: 32.960742ms
Feb 26 01:26:13.363: INFO: Pod "downwardapi-volume-1c0e4b7b-3d7b-469c-810e-e841e6380e82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041183642s
Feb 26 01:26:15.370: INFO: Pod "downwardapi-volume-1c0e4b7b-3d7b-469c-810e-e841e6380e82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048596418s
Feb 26 01:26:17.375: INFO: Pod "downwardapi-volume-1c0e4b7b-3d7b-469c-810e-e841e6380e82": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053220567s
Feb 26 01:26:19.388: INFO: Pod "downwardapi-volume-1c0e4b7b-3d7b-469c-810e-e841e6380e82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066506822s
STEP: Saw pod success
Feb 26 01:26:19.389: INFO: Pod "downwardapi-volume-1c0e4b7b-3d7b-469c-810e-e841e6380e82" satisfied condition "success or failure"
Feb 26 01:26:19.395: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1c0e4b7b-3d7b-469c-810e-e841e6380e82 container client-container: 
STEP: delete the pod
Feb 26 01:26:19.468: INFO: Waiting for pod downwardapi-volume-1c0e4b7b-3d7b-469c-810e-e841e6380e82 to disappear
Feb 26 01:26:19.501: INFO: Pod downwardapi-volume-1c0e4b7b-3d7b-469c-810e-e841e6380e82 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:26:19.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9692" for this suite.

• [SLOW TEST:8.343 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":280,"completed":254,"skipped":4106,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:26:19.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 26 01:26:19.783: INFO: Waiting up to 5m0s for pod "pod-f76f9c1b-86d0-442f-8d38-d799b51b2fb8" in namespace "emptydir-1918" to be "success or failure"
Feb 26 01:26:19.837: INFO: Pod "pod-f76f9c1b-86d0-442f-8d38-d799b51b2fb8": Phase="Pending", Reason="", readiness=false. Elapsed: 54.049563ms
Feb 26 01:26:21.847: INFO: Pod "pod-f76f9c1b-86d0-442f-8d38-d799b51b2fb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063126231s
Feb 26 01:26:23.859: INFO: Pod "pod-f76f9c1b-86d0-442f-8d38-d799b51b2fb8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075975035s
Feb 26 01:26:25.872: INFO: Pod "pod-f76f9c1b-86d0-442f-8d38-d799b51b2fb8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088862421s
Feb 26 01:26:27.881: INFO: Pod "pod-f76f9c1b-86d0-442f-8d38-d799b51b2fb8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097663623s
Feb 26 01:26:29.886: INFO: Pod "pod-f76f9c1b-86d0-442f-8d38-d799b51b2fb8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.103035378s
Feb 26 01:26:31.896: INFO: Pod "pod-f76f9c1b-86d0-442f-8d38-d799b51b2fb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.112087584s
STEP: Saw pod success
Feb 26 01:26:31.896: INFO: Pod "pod-f76f9c1b-86d0-442f-8d38-d799b51b2fb8" satisfied condition "success or failure"
Feb 26 01:26:31.899: INFO: Trying to get logs from node jerma-node pod pod-f76f9c1b-86d0-442f-8d38-d799b51b2fb8 container test-container: 
STEP: delete the pod
Feb 26 01:26:31.933: INFO: Waiting for pod pod-f76f9c1b-86d0-442f-8d38-d799b51b2fb8 to disappear
Feb 26 01:26:31.964: INFO: Pod pod-f76f9c1b-86d0-442f-8d38-d799b51b2fb8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:26:31.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1918" for this suite.

• [SLOW TEST:12.467 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":255,"skipped":4133,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:26:31.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:26:32.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:26:44.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6305" for this suite.

• [SLOW TEST:12.433 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":280,"completed":256,"skipped":4147,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:26:44.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:26:44.531: INFO: Creating deployment "webserver-deployment"
Feb 26 01:26:44.673: INFO: Waiting for observed generation 1
Feb 26 01:26:48.938: INFO: Waiting for all required pods to come up
Feb 26 01:26:51.470: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 26 01:27:18.199: INFO: Waiting for deployment "webserver-deployment" to complete
Feb 26 01:27:18.210: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb 26 01:27:18.231: INFO: Updating deployment webserver-deployment
Feb 26 01:27:18.231: INFO: Waiting for observed generation 2
Feb 26 01:27:20.518: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 26 01:27:20.533: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 26 01:27:20.542: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 26 01:27:21.438: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 26 01:27:21.439: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 26 01:27:21.449: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 26 01:27:21.708: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Feb 26 01:27:21.708: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb 26 01:27:21.719: INFO: Updating deployment webserver-deployment
Feb 26 01:27:21.719: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Feb 26 01:27:23.258: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 26 01:27:31.387: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
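
The two verified counts fall out of proportional scaling arithmetic: with maxSurge=3, growing the deployment from 10 to 30 raises the allowed total from 13 (8 old + 5 new) to 33, and the 20 extra replicas are split in the current 8:5 ratio — about 20·8/13 ≈ 12 more for the old ReplicaSet (→ 20) and the remaining 8 for the new one (→ 13). A toy sketch of that split follows; it is not the deployment controller's actual code, and its leftover-replica rule (give the rounding remainder to the newest set) is a simplification:

// Sketch only: divide `extra` replicas across current ReplicaSet sizes
// proportionally, handing the rounding leftover to the last (newest) set.
package main

import "fmt"

func proportionalScale(sizes []int, extra int) []int {
	total := 0
	for _, s := range sizes {
		total += s
	}
	out := make([]int, len(sizes))
	used := 0
	for i, s := range sizes {
		share := extra * s / total // floor of the proportional share
		out[i] = s + share
		used += share
	}
	out[len(out)-1] += extra - used // leftover to the newest set (simplified rule)
	return out
}

func main() {
	// Old RS at 8, new RS at 5, 20 extra replicas to place: prints [20 13],
	// matching the .spec.replicas values verified in the log above.
	fmt.Println(proportionalScale([]int{8, 5}, 20))
}
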
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 26 01:27:33.849: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-3141 /apis/apps/v1/namespaces/deployment-3141/deployments/webserver-deployment 164cfdce-61ae-4ed1-80b2-56e3cee501c9 10780372 3 2020-02-26 01:26:44 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028d5548  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-26 01:27:18 +0000 UTC,LastTransitionTime:2020-02-26 01:26:44 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-26 01:27:22 +0000 UTC,LastTransitionTime:2020-02-26 01:27:22 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Feb 26 01:27:34.980: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-3141 /apis/apps/v1/namespaces/deployment-3141/replicasets/webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 10780374 3 2020-02-26 01:27:18 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 164cfdce-61ae-4ed1-80b2-56e3cee501c9 0xc004c1eeb7 0xc004c1eeb8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004c1ef28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 26 01:27:34.980: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Feb 26 01:27:34.980: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-3141 /apis/apps/v1/namespaces/deployment-3141/replicasets/webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 10780359 3 2020-02-26 01:26:44 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 164cfdce-61ae-4ed1-80b2-56e3cee501c9 0xc004c1edf7 0xc004c1edf8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004c1ee58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Feb 26 01:27:37.120: INFO: Pod "webserver-deployment-595b5b9587-4thhx" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4thhx webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-4thhx ae17255c-c68b-4cce-aa0c-eaa1cd789249 10780225 0 2020-02-26 01:26:44 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc004c1f647 0xc004c1f648}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:16 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-02-26 01:26:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-26 01:27:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://f03ebbea3f32200dda1a7a55f2d0c2683e962a39b25619008b2d07a05611a6a3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.120: INFO: Pod "webserver-deployment-595b5b9587-5bmqw" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5bmqw webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-5bmqw be8994c3-1742-4cb9-b7b9-c0ac8dd787be 10780340 0 2020-02-26 01:27:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc004c1f9a7 0xc004c1f9a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.121: INFO: Pod "webserver-deployment-595b5b9587-64xdx" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-64xdx webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-64xdx af6d1c5f-c8b2-4e3e-b35e-12959a1a544b 10780351 0 2020-02-26 01:27:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc004c1fb47 0xc004c1fb48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.121: INFO: Pod "webserver-deployment-595b5b9587-65l4b" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-65l4b webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-65l4b 97d8b8d5-3346-488a-ab5f-65d516ef7748 10780324 0 2020-02-26 01:27:23 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc004c1fcd7 0xc004c1fcd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.121: INFO: Pod "webserver-deployment-595b5b9587-6z9w6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6z9w6 webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-6z9w6 6da09827-50dc-4336-b591-9e15da8d1233 10780331 0 2020-02-26 01:27:23 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc004c1ffc7 0xc004c1ffc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.121: INFO: Pod "webserver-deployment-595b5b9587-74n6r" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-74n6r webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-74n6r 4d514fcb-aff7-4344-a831-a135ee090b0a 10780232 0 2020-02-26 01:26:44 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277c457 0xc00277c458}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:16 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-02-26 01:26:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-26 01:27:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7999abab57978a766022869ef3d0ce86ed13838839c17c5ee4ded4f151e1bba5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.122: INFO: Pod "webserver-deployment-595b5b9587-88fwj" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-88fwj webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-88fwj 54d5c81b-b0a8-4552-b0ad-ff3bc17c743c 10780184 0 2020-02-26 01:26:44 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277c737 0xc00277c738}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-26 01:26:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-26 01:27:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7db43956018f4ffa831ea5dc43a2ae4b34e798c122dbb8503ae99fe1826d06b4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
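[Editor's note] The "is available" / "is not available" labels on these dumps come down to the pod's Ready condition and how long it has held. A minimal Go sketch of roughly that rule follows; the helper name `isPodAvailable` and the sample values are illustrative assumptions, approximating (not reproducing) the upstream pod-availability logic:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable approximates the controller's availability rule: the Ready
// condition must be True and must have held for at least minReadySeconds.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady {
			continue
		}
		if c.Status != corev1.ConditionTrue {
			return false
		}
		if minReadySeconds == 0 {
			return true
		}
		held := time.Duration(minReadySeconds) * time.Second
		return !c.LastTransitionTime.IsZero() && c.LastTransitionTime.Add(held).Before(now)
	}
	// No Ready condition recorded yet (e.g. a dump that only lists PodScheduled).
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodRunning,
		Conditions: []corev1.PodCondition{{
			Type:               corev1.PodReady,
			Status:             corev1.ConditionTrue,
			LastTransitionTime: metav1.NewTime(time.Now().Add(-30 * time.Second)),
		}},
	}}
	fmt.Println(isPodAvailable(pod, 10, time.Now())) // true: Ready for 30s > 10s
}
```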
Feb 26 01:27:37.122: INFO: Pod "webserver-deployment-595b5b9587-b9fs4" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-b9fs4 webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-b9fs4 55a29f02-c5be-477e-8c81-391efa7c8930 10780344 0 2020-02-26 01:27:21 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277c987 0xc00277c988}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-26 01:27:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.122: INFO: Pod "webserver-deployment-595b5b9587-c76xr" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-c76xr webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-c76xr 9c53127d-2a1f-4de3-8268-30baa002fb53 10780201 0 2020-02-26 01:26:44 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277cae7 0xc00277cae8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-02-26 01:26:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-26 01:27:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://30c9312e6fac9eeff6085fdd049b643cbfe5663bae62dc7488c34abed5c2870b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.123: INFO: Pod "webserver-deployment-595b5b9587-fftmn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fftmn webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-fftmn a6551203-832b-48a8-ba2f-41be3ff02940 10780385 0 2020-02-26 01:27:23 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277cc67 0xc00277cc68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-26 01:27:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.123: INFO: Pod "webserver-deployment-595b5b9587-gvp2b" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gvp2b webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-gvp2b 58d4e5a8-0cb5-43fc-a192-72622d252523 10780217 0 2020-02-26 01:26:44 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277cdc7 0xc00277cdc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-02-26 01:26:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-26 01:27:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c1fe053d8953bab7d96576811bd7f76302bc0f1a4e8aefb3ed01996d1aef9769,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.123: INFO: Pod "webserver-deployment-595b5b9587-hft7d" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hft7d webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-hft7d 0c9efa7d-7aa5-4ace-8a56-13e8249204b0 10780347 0 2020-02-26 01:27:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277cf37 0xc00277cf38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
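[Editor's note] Every dump carries a `pod-template-hash` label stamped by its owning ReplicaSet (595b5b9587 for the httpd pods, c7997dcc8 for the new ones), which is how the old and new pods of the rollout can be listed separately. A hedged client-go sketch, assuming a kubeconfig path that may differ on other clusters:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute whatever the cluster uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List only the pods owned by the old ReplicaSet via its template hash.
	pods, err := cs.CoreV1().Pods("deployment-3141").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "pod-template-hash=595b5b9587"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}
```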
Feb 26 01:27:37.124: INFO: Pod "webserver-deployment-595b5b9587-kldbd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kldbd webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-kldbd bddba5b7-99e9-4698-8a70-8d0dd655b0b5 10780357 0 2020-02-26 01:27:22 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277d047 0xc00277d048}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-26 01:27:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.124: INFO: Pod "webserver-deployment-595b5b9587-mfj96" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mfj96 webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-mfj96 d2e0a84d-03ff-4cc7-aad5-72c64850460a 10780204 0 2020-02-26 01:26:44 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277d1a7 0xc00277d1a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-26 01:26:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-26 01:27:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0de0a00fa0da47149082ab8e66bbc7558c37a914e0cf0377730d6bce3ae8176d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.124: INFO: Pod "webserver-deployment-595b5b9587-q99pb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-q99pb webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-q99pb 6c3dcf20-3b1b-4de7-a667-fc7a37bfefdb 10780192 0 2020-02-26 01:26:44 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277d327 0xc00277d328}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.6,StartTime:2020-02-26 01:26:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-26 01:27:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b3654f218c0e2c75347cb8f86227d042228d3372bb2efdd3eb7a9dadc5b6b4e1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.125: INFO: Pod "webserver-deployment-595b5b9587-rb7tn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rb7tn webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-rb7tn 73ae35df-ba91-42ea-95bc-c2a8c2a9728c 10780353 0 2020-02-26 01:27:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277d4a7 0xc00277d4a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.125: INFO: Pod "webserver-deployment-595b5b9587-sls8n" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sls8n webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-sls8n 6a5c5e86-b201-469e-8a52-be1884c157ea 10780336 0 2020-02-26 01:27:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277d5b7 0xc00277d5b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.125: INFO: Pod "webserver-deployment-595b5b9587-twwg8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-twwg8 webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-twwg8 ac64d3a2-2a3e-4186-a0a7-8f454a5f8575 10780381 0 2020-02-26 01:27:23 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277d6d7 0xc00277d6d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-26 01:27:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.126: INFO: Pod "webserver-deployment-595b5b9587-w96gr" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w96gr webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-w96gr 53bc190c-3910-40de-b8cc-e775d0c91d44 10780384 0 2020-02-26 01:27:22 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277d837 0xc00277d838}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-26 01:27:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
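[Editor's note] The not-available dumps all show the container stuck in `Waiting{Reason:ContainerCreating}`, which is what keeps Ready and ContainersReady at False. A small sketch (the helper `waitingReasons` is illustrative, not part of the suite) that surfaces that reason from a pod's container statuses:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// waitingReasons collects, per container, the Waiting reason (such as
// ContainerCreating in the dumps above) for containers that have not started.
func waitingReasons(pod *corev1.Pod) map[string]string {
	out := map[string]string{}
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil {
			out[cs.Name] = cs.State.Waiting.Reason
		}
	}
	return out
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		ContainerStatuses: []corev1.ContainerStatus{{
			Name:  "httpd",
			State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"}},
		}},
	}}
	fmt.Println(waitingReasons(pod)) // map[httpd:ContainerCreating]
}
```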
Feb 26 01:27:37.128: INFO: Pod "webserver-deployment-595b5b9587-ztxv2" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ztxv2 webserver-deployment-595b5b9587- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-595b5b9587-ztxv2 f1305cdd-ba25-44fa-ba3e-47b1e447e23e 10780197 0 2020-02-26 01:26:44 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 82f27349-46a0-46ca-aa81-77f3180ef4f8 0xc00277d987 0xc00277d988}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:26:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-02-26 01:26:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-26 01:27:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7016c8c8a5dd24a4b1afeeeb3e9cc8e4390eea122fe8fe2102e51d587adebcff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.128: INFO: Pod "webserver-deployment-c7997dcc8-982mx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-982mx webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-982mx d048a859-7dac-4c1e-a250-1dbd7892780b 10780350 0 2020-02-26 01:27:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc00277db07 0xc00277db08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.128: INFO: Pod "webserver-deployment-c7997dcc8-9wg27" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9wg27 webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-9wg27 1b569fc2-75d8-4e1b-aa75-5bf5075a1ec4 10780282 0 2020-02-26 01:27:18 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc00277dc27 0xc00277dc28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-26 01:27:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.129: INFO: Pod "webserver-deployment-c7997dcc8-c59qf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c59qf webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-c59qf b8fb3f5b-4103-4791-9ace-d7b6ddd8e64f 10780354 0 2020-02-26 01:27:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc00277dd97 0xc00277dd98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.129: INFO: Pod "webserver-deployment-c7997dcc8-f6ck2" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f6ck2 webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-f6ck2 4ada872e-0c65-49da-995c-deb31e11ca83 10780271 0 2020-02-26 01:27:18 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc00277deb7 0xc00277deb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-26 01:27:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.130: INFO: Pod "webserver-deployment-c7997dcc8-jc475" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jc475 webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-jc475 188baf47-e4fa-4c4d-8b0f-9165ed123774 10780270 0 2020-02-26 01:27:18 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc0024f4057 0xc0024f4058}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-26 01:27:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.130: INFO: Pod "webserver-deployment-c7997dcc8-k5zbz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k5zbz webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-k5zbz 883db9d0-3952-4713-beec-9ee8bef0e685 10780364 0 2020-02-26 01:27:26 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc0024f4297 0xc0024f4298}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.131: INFO: Pod "webserver-deployment-c7997dcc8-tbxj4" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tbxj4 webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-tbxj4 40185782-960c-470d-a724-8b21ca6635a0 10780335 0 2020-02-26 01:27:23 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc0024f43c7 0xc0024f43c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.131: INFO: Pod "webserver-deployment-c7997dcc8-tcd2s" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tcd2s webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-tcd2s 9d1d9ea2-d459-413e-a88b-c442795e6acb 10780377 0 2020-02-26 01:27:22 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc0024f4647 0xc0024f4648}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-26 01:27:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.131: INFO: Pod "webserver-deployment-c7997dcc8-tkcmt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tkcmt webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-tkcmt f4d47f12-8a57-46cc-a0f8-0ac929e4fdce 10780262 0 2020-02-26 01:27:18 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc0024f4837 0xc0024f4838}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-26 01:27:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.132: INFO: Pod "webserver-deployment-c7997dcc8-vf5bf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vf5bf webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-vf5bf 7a2064e4-9240-4de5-86e4-f430a7b532d8 10780352 0 2020-02-26 01:27:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc0024f4bf7 0xc0024f4bf8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.132: INFO: Pod "webserver-deployment-c7997dcc8-w7jq9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w7jq9 webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-w7jq9 ecc3ed50-8fd8-4a9d-9731-ad32265a8d96 10780287 0 2020-02-26 01:27:18 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc0024f4df7 0xc0024f4df8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-26 01:27:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.133: INFO: Pod "webserver-deployment-c7997dcc8-x98h5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x98h5 webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-x98h5 31f1652d-28f7-4867-a6d0-5e8fa9eb5ba6 10780328 0 2020-02-26 01:27:23 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc0024f50b7 0xc0024f50b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 26 01:27:37.133: INFO: Pod "webserver-deployment-c7997dcc8-zh8j5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zh8j5 webserver-deployment-c7997dcc8- deployment-3141 /api/v1/namespaces/deployment-3141/pods/webserver-deployment-c7997dcc8-zh8j5 b4a730b5-0661-4e00-898f-3a3493b846a2 10780348 0 2020-02-26 01:27:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9333f19a-8cdd-4ab4-a895-ca6b3ddf5390 0xc0024f5247 0xc0024f5248}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tvqt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tvqt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tvqt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:27:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:27:37.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3141" for this suite.

• [SLOW TEST:55.426 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":280,"completed":257,"skipped":4152,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:27:39.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 26 01:29:07.058: INFO: Pod name wrapped-volume-race-bb35da7a-a25f-47fa-99b5-ce1839568aee: Found 0 pods out of 5
Feb 26 01:29:12.245: INFO: Pod name wrapped-volume-race-bb35da7a-a25f-47fa-99b5-ce1839568aee: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bb35da7a-a25f-47fa-99b5-ce1839568aee in namespace emptydir-wrapper-713, will wait for the garbage collector to delete the pods
Feb 26 01:30:16.439: INFO: Deleting ReplicationController wrapped-volume-race-bb35da7a-a25f-47fa-99b5-ce1839568aee took: 9.999903ms
Feb 26 01:30:16.939: INFO: Terminating ReplicationController wrapped-volume-race-bb35da7a-a25f-47fa-99b5-ce1839568aee pods took: 500.638726ms
STEP: Creating RC which spawns configmap-volume pods
Feb 26 01:30:32.604: INFO: Pod name wrapped-volume-race-09ce429a-f5e8-4fb2-a98b-4cf070e0f0cf: Found 0 pods out of 5
Feb 26 01:30:37.614: INFO: Pod name wrapped-volume-race-09ce429a-f5e8-4fb2-a98b-4cf070e0f0cf: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-09ce429a-f5e8-4fb2-a98b-4cf070e0f0cf in namespace emptydir-wrapper-713, will wait for the garbage collector to delete the pods
Feb 26 01:31:09.021: INFO: Deleting ReplicationController wrapped-volume-race-09ce429a-f5e8-4fb2-a98b-4cf070e0f0cf took: 48.877074ms
Feb 26 01:31:09.422: INFO: Terminating ReplicationController wrapped-volume-race-09ce429a-f5e8-4fb2-a98b-4cf070e0f0cf pods took: 400.911541ms
STEP: Creating RC which spawns configmap-volume pods
Feb 26 01:31:24.159: INFO: Pod name wrapped-volume-race-edfe8d68-4a9e-47e8-9ad3-3b935d07c3fa: Found 0 pods out of 5
Feb 26 01:31:29.167: INFO: Pod name wrapped-volume-race-edfe8d68-4a9e-47e8-9ad3-3b935d07c3fa: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-edfe8d68-4a9e-47e8-9ad3-3b935d07c3fa in namespace emptydir-wrapper-713, will wait for the garbage collector to delete the pods
Feb 26 01:31:55.266: INFO: Deleting ReplicationController wrapped-volume-race-edfe8d68-4a9e-47e8-9ad3-3b935d07c3fa took: 18.475251ms
Feb 26 01:31:55.769: INFO: Terminating ReplicationController wrapped-volume-race-edfe8d68-4a9e-47e8-9ad3-3b935d07c3fa pods took: 502.354416ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:32:10.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-713" for this suite.

• [SLOW TEST:270.233 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":280,"completed":258,"skipped":4161,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:32:10.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 01:32:10.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bd70f35-aef4-41f7-b2d1-01e20647f5b0" in namespace "downward-api-400" to be "success or failure"
Feb 26 01:32:10.238: INFO: Pod "downwardapi-volume-7bd70f35-aef4-41f7-b2d1-01e20647f5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.045527ms
Feb 26 01:32:12.254: INFO: Pod "downwardapi-volume-7bd70f35-aef4-41f7-b2d1-01e20647f5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034404525s
Feb 26 01:32:14.259: INFO: Pod "downwardapi-volume-7bd70f35-aef4-41f7-b2d1-01e20647f5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039157879s
Feb 26 01:32:16.451: INFO: Pod "downwardapi-volume-7bd70f35-aef4-41f7-b2d1-01e20647f5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.231059182s
Feb 26 01:32:18.482: INFO: Pod "downwardapi-volume-7bd70f35-aef4-41f7-b2d1-01e20647f5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.261576876s
Feb 26 01:32:20.534: INFO: Pod "downwardapi-volume-7bd70f35-aef4-41f7-b2d1-01e20647f5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.314350229s
Feb 26 01:32:22.548: INFO: Pod "downwardapi-volume-7bd70f35-aef4-41f7-b2d1-01e20647f5b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.328145075s
STEP: Saw pod success
Feb 26 01:32:22.549: INFO: Pod "downwardapi-volume-7bd70f35-aef4-41f7-b2d1-01e20647f5b0" satisfied condition "success or failure"
Feb 26 01:32:22.556: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7bd70f35-aef4-41f7-b2d1-01e20647f5b0 container client-container: 
STEP: delete the pod
Feb 26 01:32:22.664: INFO: Waiting for pod downwardapi-volume-7bd70f35-aef4-41f7-b2d1-01e20647f5b0 to disappear
Feb 26 01:32:22.668: INFO: Pod downwardapi-volume-7bd70f35-aef4-41f7-b2d1-01e20647f5b0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:32:22.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-400" for this suite.

• [SLOW TEST:12.647 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":259,"skipped":4163,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:32:22.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 01:32:22.892: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fb281b7-7a7d-4c6b-bcb7-61ababe4cfa7" in namespace "downward-api-9146" to be "success or failure"
Feb 26 01:32:22.933: INFO: Pod "downwardapi-volume-3fb281b7-7a7d-4c6b-bcb7-61ababe4cfa7": Phase="Pending", Reason="", readiness=false. Elapsed: 40.636873ms
Feb 26 01:32:24.939: INFO: Pod "downwardapi-volume-3fb281b7-7a7d-4c6b-bcb7-61ababe4cfa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046755563s
Feb 26 01:32:26.946: INFO: Pod "downwardapi-volume-3fb281b7-7a7d-4c6b-bcb7-61ababe4cfa7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053892688s
Feb 26 01:32:28.993: INFO: Pod "downwardapi-volume-3fb281b7-7a7d-4c6b-bcb7-61ababe4cfa7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10046834s
Feb 26 01:32:30.999: INFO: Pod "downwardapi-volume-3fb281b7-7a7d-4c6b-bcb7-61ababe4cfa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10727271s
STEP: Saw pod success
Feb 26 01:32:31.000: INFO: Pod "downwardapi-volume-3fb281b7-7a7d-4c6b-bcb7-61ababe4cfa7" satisfied condition "success or failure"
Feb 26 01:32:31.003: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3fb281b7-7a7d-4c6b-bcb7-61ababe4cfa7 container client-container: 
STEP: delete the pod
Feb 26 01:32:31.182: INFO: Waiting for pod downwardapi-volume-3fb281b7-7a7d-4c6b-bcb7-61ababe4cfa7 to disappear
Feb 26 01:32:31.220: INFO: Pod downwardapi-volume-3fb281b7-7a7d-4c6b-bcb7-61ababe4cfa7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:32:31.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9146" for this suite.

• [SLOW TEST:8.520 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":260,"skipped":4171,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:32:31.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384
STEP: creating the pod
Feb 26 01:32:31.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5036'
Feb 26 01:32:34.046: INFO: stderr: ""
Feb 26 01:32:34.047: INFO: stdout: "pod/pause created\n"
Feb 26 01:32:34.047: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 26 01:32:34.047: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5036" to be "running and ready"
Feb 26 01:32:34.068: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 21.020835ms
Feb 26 01:32:37.539: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.491712043s
Feb 26 01:32:39.556: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.508726397s
Feb 26 01:32:41.563: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 7.515754258s
Feb 26 01:32:41.563: INFO: Pod "pause" satisfied condition "running and ready"
Feb 26 01:32:41.563: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 26 01:32:41.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5036'
Feb 26 01:32:41.807: INFO: stderr: ""
Feb 26 01:32:41.807: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 26 01:32:41.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5036'
Feb 26 01:32:41.931: INFO: stderr: ""
Feb 26 01:32:41.932: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          7s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 26 01:32:41.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5036'
Feb 26 01:32:42.060: INFO: stderr: ""
Feb 26 01:32:42.060: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 26 01:32:42.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5036'
Feb 26 01:32:42.188: INFO: stderr: ""
Feb 26 01:32:42.188: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391
STEP: using delete to clean up resources
Feb 26 01:32:42.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5036'
Feb 26 01:32:42.328: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 01:32:42.328: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 26 01:32:42.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5036'
Feb 26 01:32:42.485: INFO: stderr: "No resources found in kubectl-5036 namespace.\n"
Feb 26 01:32:42.486: INFO: stdout: ""
Feb 26 01:32:42.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5036 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 26 01:32:42.598: INFO: stderr: ""
Feb 26 01:32:42.599: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:32:42.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5036" for this suite.

• [SLOW TEST:11.367 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":280,"completed":261,"skipped":4173,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:32:42.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override all
Feb 26 01:32:42.898: INFO: Waiting up to 5m0s for pod "client-containers-80235bfe-bd7e-4e40-8e2f-4add6496c9e4" in namespace "containers-3078" to be "success or failure"
Feb 26 01:32:42.911: INFO: Pod "client-containers-80235bfe-bd7e-4e40-8e2f-4add6496c9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.142132ms
Feb 26 01:32:44.925: INFO: Pod "client-containers-80235bfe-bd7e-4e40-8e2f-4add6496c9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026792277s
Feb 26 01:32:46.932: INFO: Pod "client-containers-80235bfe-bd7e-4e40-8e2f-4add6496c9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034191993s
Feb 26 01:32:48.940: INFO: Pod "client-containers-80235bfe-bd7e-4e40-8e2f-4add6496c9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041836819s
Feb 26 01:32:50.944: INFO: Pod "client-containers-80235bfe-bd7e-4e40-8e2f-4add6496c9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046029806s
Feb 26 01:32:52.952: INFO: Pod "client-containers-80235bfe-bd7e-4e40-8e2f-4add6496c9e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053303217s
STEP: Saw pod success
Feb 26 01:32:52.952: INFO: Pod "client-containers-80235bfe-bd7e-4e40-8e2f-4add6496c9e4" satisfied condition "success or failure"
Feb 26 01:32:52.956: INFO: Trying to get logs from node jerma-node pod client-containers-80235bfe-bd7e-4e40-8e2f-4add6496c9e4 container test-container: 
STEP: delete the pod
Feb 26 01:32:53.013: INFO: Waiting for pod client-containers-80235bfe-bd7e-4e40-8e2f-4add6496c9e4 to disappear
Feb 26 01:32:53.017: INFO: Pod client-containers-80235bfe-bd7e-4e40-8e2f-4add6496c9e4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:32:53.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3078" for this suite.

• [SLOW TEST:10.419 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":280,"completed":262,"skipped":4178,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:32:53.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 26 01:32:53.131: INFO: Waiting up to 5m0s for pod "pod-ed89bff1-0ab9-4218-99c2-274384668b86" in namespace "emptydir-1887" to be "success or failure"
Feb 26 01:32:53.151: INFO: Pod "pod-ed89bff1-0ab9-4218-99c2-274384668b86": Phase="Pending", Reason="", readiness=false. Elapsed: 19.841525ms
Feb 26 01:32:55.157: INFO: Pod "pod-ed89bff1-0ab9-4218-99c2-274384668b86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025650524s
Feb 26 01:32:57.165: INFO: Pod "pod-ed89bff1-0ab9-4218-99c2-274384668b86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03372778s
Feb 26 01:32:59.174: INFO: Pod "pod-ed89bff1-0ab9-4218-99c2-274384668b86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042041303s
Feb 26 01:33:01.179: INFO: Pod "pod-ed89bff1-0ab9-4218-99c2-274384668b86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047130752s
STEP: Saw pod success
Feb 26 01:33:01.179: INFO: Pod "pod-ed89bff1-0ab9-4218-99c2-274384668b86" satisfied condition "success or failure"
Feb 26 01:33:01.181: INFO: Trying to get logs from node jerma-node pod pod-ed89bff1-0ab9-4218-99c2-274384668b86 container test-container: 
STEP: delete the pod
Feb 26 01:33:01.388: INFO: Waiting for pod pod-ed89bff1-0ab9-4218-99c2-274384668b86 to disappear
Feb 26 01:33:01.395: INFO: Pod pod-ed89bff1-0ab9-4218-99c2-274384668b86 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:33:01.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1887" for this suite.

• [SLOW TEST:8.375 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":263,"skipped":4192,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:33:01.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-4ce173e1-56f3-411d-b3b2-860431a2b647
STEP: Creating a pod to test consume secrets
Feb 26 01:33:01.577: INFO: Waiting up to 5m0s for pod "pod-secrets-688dcc30-d372-4b65-898a-382684865fd6" in namespace "secrets-5612" to be "success or failure"
Feb 26 01:33:01.586: INFO: Pod "pod-secrets-688dcc30-d372-4b65-898a-382684865fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.535069ms
Feb 26 01:33:03.596: INFO: Pod "pod-secrets-688dcc30-d372-4b65-898a-382684865fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019267209s
Feb 26 01:33:05.605: INFO: Pod "pod-secrets-688dcc30-d372-4b65-898a-382684865fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028716071s
Feb 26 01:33:07.613: INFO: Pod "pod-secrets-688dcc30-d372-4b65-898a-382684865fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036090082s
Feb 26 01:33:09.631: INFO: Pod "pod-secrets-688dcc30-d372-4b65-898a-382684865fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054696268s
Feb 26 01:33:11.640: INFO: Pod "pod-secrets-688dcc30-d372-4b65-898a-382684865fd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063068679s
STEP: Saw pod success
Feb 26 01:33:11.640: INFO: Pod "pod-secrets-688dcc30-d372-4b65-898a-382684865fd6" satisfied condition "success or failure"
Feb 26 01:33:11.645: INFO: Trying to get logs from node jerma-node pod pod-secrets-688dcc30-d372-4b65-898a-382684865fd6 container secret-volume-test: 
STEP: delete the pod
Feb 26 01:33:11.713: INFO: Waiting for pod pod-secrets-688dcc30-d372-4b65-898a-382684865fd6 to disappear
Feb 26 01:33:11.718: INFO: Pod pod-secrets-688dcc30-d372-4b65-898a-382684865fd6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:33:11.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5612" for this suite.

• [SLOW TEST:10.329 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":264,"skipped":4211,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:33:11.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1026
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Feb 26 01:33:11.921: INFO: Found 0 stateful pods, waiting for 3
Feb 26 01:33:21.933: INFO: Found 2 stateful pods, waiting for 3
Feb 26 01:33:31.932: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 01:33:31.932: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 01:33:31.932: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 26 01:33:42.094: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 01:33:42.094: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 01:33:42.094: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 01:33:42.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1026 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 26 01:33:42.669: INFO: stderr: "I0226 01:33:42.347032    4295 log.go:172] (0xc000ad0dc0) (0xc000944f00) Create stream\nI0226 01:33:42.347503    4295 log.go:172] (0xc000ad0dc0) (0xc000944f00) Stream added, broadcasting: 1\nI0226 01:33:42.353702    4295 log.go:172] (0xc000ad0dc0) Reply frame received for 1\nI0226 01:33:42.353906    4295 log.go:172] (0xc000ad0dc0) (0xc0006b5e00) Create stream\nI0226 01:33:42.353936    4295 log.go:172] (0xc000ad0dc0) (0xc0006b5e00) Stream added, broadcasting: 3\nI0226 01:33:42.355892    4295 log.go:172] (0xc000ad0dc0) Reply frame received for 3\nI0226 01:33:42.355962    4295 log.go:172] (0xc000ad0dc0) (0xc00097a000) Create stream\nI0226 01:33:42.356001    4295 log.go:172] (0xc000ad0dc0) (0xc00097a000) Stream added, broadcasting: 5\nI0226 01:33:42.357378    4295 log.go:172] (0xc000ad0dc0) Reply frame received for 5\nI0226 01:33:42.475377    4295 log.go:172] (0xc000ad0dc0) Data frame received for 5\nI0226 01:33:42.475441    4295 log.go:172] (0xc00097a000) (5) Data frame handling\nI0226 01:33:42.475491    4295 log.go:172] (0xc00097a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0226 01:33:42.516382    4295 log.go:172] (0xc000ad0dc0) Data frame received for 3\nI0226 01:33:42.516493    4295 log.go:172] (0xc0006b5e00) (3) Data frame handling\nI0226 01:33:42.516542    4295 log.go:172] (0xc0006b5e00) (3) Data frame sent\nI0226 01:33:42.654810    4295 log.go:172] (0xc000ad0dc0) Data frame received for 1\nI0226 01:33:42.655333    4295 log.go:172] (0xc000ad0dc0) (0xc00097a000) Stream removed, broadcasting: 5\nI0226 01:33:42.655422    4295 log.go:172] (0xc000944f00) (1) Data frame handling\nI0226 01:33:42.655471    4295 log.go:172] (0xc000ad0dc0) (0xc0006b5e00) Stream removed, broadcasting: 3\nI0226 01:33:42.655528    4295 log.go:172] (0xc000944f00) (1) Data frame sent\nI0226 01:33:42.655549    4295 log.go:172] (0xc000ad0dc0) (0xc000944f00) Stream removed, broadcasting: 1\nI0226 01:33:42.655580    4295 log.go:172] (0xc000ad0dc0) Go away received\nI0226 01:33:42.656908    4295 log.go:172] (0xc000ad0dc0) (0xc000944f00) Stream removed, broadcasting: 1\nI0226 01:33:42.656929    4295 log.go:172] (0xc000ad0dc0) (0xc0006b5e00) Stream removed, broadcasting: 3\nI0226 01:33:42.656934    4295 log.go:172] (0xc000ad0dc0) (0xc00097a000) Stream removed, broadcasting: 5\n"
Feb 26 01:33:42.669: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 26 01:33:42.669: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 26 01:33:52.723: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 26 01:34:02.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1026 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 26 01:34:03.283: INFO: stderr: "I0226 01:34:03.062406    4314 log.go:172] (0xc000936000) (0xc000928320) Create stream\nI0226 01:34:03.062692    4314 log.go:172] (0xc000936000) (0xc000928320) Stream added, broadcasting: 1\nI0226 01:34:03.080464    4314 log.go:172] (0xc000936000) Reply frame received for 1\nI0226 01:34:03.080655    4314 log.go:172] (0xc000936000) (0xc0009e60a0) Create stream\nI0226 01:34:03.080674    4314 log.go:172] (0xc000936000) (0xc0009e60a0) Stream added, broadcasting: 3\nI0226 01:34:03.082310    4314 log.go:172] (0xc000936000) Reply frame received for 3\nI0226 01:34:03.082344    4314 log.go:172] (0xc000936000) (0xc0003ebea0) Create stream\nI0226 01:34:03.082352    4314 log.go:172] (0xc000936000) (0xc0003ebea0) Stream added, broadcasting: 5\nI0226 01:34:03.084530    4314 log.go:172] (0xc000936000) Reply frame received for 5\nI0226 01:34:03.194642    4314 log.go:172] (0xc000936000) Data frame received for 5\nI0226 01:34:03.194775    4314 log.go:172] (0xc0003ebea0) (5) Data frame handling\nI0226 01:34:03.194793    4314 log.go:172] (0xc0003ebea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0226 01:34:03.194899    4314 log.go:172] (0xc000936000) Data frame received for 3\nI0226 01:34:03.195014    4314 log.go:172] (0xc0009e60a0) (3) Data frame handling\nI0226 01:34:03.195043    4314 log.go:172] (0xc0009e60a0) (3) Data frame sent\nI0226 01:34:03.272131    4314 log.go:172] (0xc000936000) Data frame received for 1\nI0226 01:34:03.272172    4314 log.go:172] (0xc000936000) (0xc0009e60a0) Stream removed, broadcasting: 3\nI0226 01:34:03.272219    4314 log.go:172] (0xc000928320) (1) Data frame handling\nI0226 01:34:03.272231    4314 log.go:172] (0xc000928320) (1) Data frame sent\nI0226 01:34:03.272236    4314 log.go:172] (0xc000936000) (0xc000928320) Stream removed, broadcasting: 1\nI0226 01:34:03.272739    4314 log.go:172] (0xc000936000) (0xc0003ebea0) Stream removed, broadcasting: 5\nI0226 01:34:03.272769    4314 log.go:172] (0xc000936000) (0xc000928320) Stream removed, broadcasting: 1\nI0226 01:34:03.272796    4314 log.go:172] (0xc000936000) (0xc0009e60a0) Stream removed, broadcasting: 3\nI0226 01:34:03.272806    4314 log.go:172] (0xc000936000) Go away received\nI0226 01:34:03.272838    4314 log.go:172] (0xc000936000) (0xc0003ebea0) Stream removed, broadcasting: 5\n"
Feb 26 01:34:03.283: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 26 01:34:03.283: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 26 01:34:13.318: INFO: Waiting for StatefulSet statefulset-1026/ss2 to complete update
Feb 26 01:34:13.318: INFO: Waiting for Pod statefulset-1026/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 26 01:34:13.318: INFO: Waiting for Pod statefulset-1026/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 26 01:34:23.351: INFO: Waiting for StatefulSet statefulset-1026/ss2 to complete update
Feb 26 01:34:23.351: INFO: Waiting for Pod statefulset-1026/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 26 01:34:33.325: INFO: Waiting for StatefulSet statefulset-1026/ss2 to complete update
Feb 26 01:34:33.325: INFO: Waiting for Pod statefulset-1026/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 26 01:34:43.382: INFO: Waiting for StatefulSet statefulset-1026/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 26 01:34:53.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1026 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 26 01:34:53.857: INFO: stderr: "I0226 01:34:53.551595    4335 log.go:172] (0xc000a2d080) (0xc0009c6280) Create stream\nI0226 01:34:53.551727    4335 log.go:172] (0xc000a2d080) (0xc0009c6280) Stream added, broadcasting: 1\nI0226 01:34:53.555184    4335 log.go:172] (0xc000a2d080) Reply frame received for 1\nI0226 01:34:53.555228    4335 log.go:172] (0xc000a2d080) (0xc0009c6320) Create stream\nI0226 01:34:53.555256    4335 log.go:172] (0xc000a2d080) (0xc0009c6320) Stream added, broadcasting: 3\nI0226 01:34:53.556398    4335 log.go:172] (0xc000a2d080) Reply frame received for 3\nI0226 01:34:53.556434    4335 log.go:172] (0xc000a2d080) (0xc0009de5a0) Create stream\nI0226 01:34:53.556445    4335 log.go:172] (0xc000a2d080) (0xc0009de5a0) Stream added, broadcasting: 5\nI0226 01:34:53.557701    4335 log.go:172] (0xc000a2d080) Reply frame received for 5\nI0226 01:34:53.676394    4335 log.go:172] (0xc000a2d080) Data frame received for 5\nI0226 01:34:53.677005    4335 log.go:172] (0xc0009de5a0) (5) Data frame handling\nI0226 01:34:53.677046    4335 log.go:172] (0xc0009de5a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0226 01:34:53.733459    4335 log.go:172] (0xc000a2d080) Data frame received for 3\nI0226 01:34:53.733506    4335 log.go:172] (0xc0009c6320) (3) Data frame handling\nI0226 01:34:53.733528    4335 log.go:172] (0xc0009c6320) (3) Data frame sent\nI0226 01:34:53.839592    4335 log.go:172] (0xc000a2d080) (0xc0009c6320) Stream removed, broadcasting: 3\nI0226 01:34:53.840260    4335 log.go:172] (0xc000a2d080) (0xc0009de5a0) Stream removed, broadcasting: 5\nI0226 01:34:53.840338    4335 log.go:172] (0xc000a2d080) Data frame received for 1\nI0226 01:34:53.840362    4335 log.go:172] (0xc0009c6280) (1) Data frame handling\nI0226 01:34:53.840381    4335 log.go:172] (0xc0009c6280) (1) Data frame sent\nI0226 01:34:53.840395    4335 log.go:172] (0xc000a2d080) (0xc0009c6280) Stream removed, broadcasting: 1\nI0226 01:34:53.840409    4335 log.go:172] (0xc000a2d080) Go away received\nI0226 01:34:53.841602    4335 log.go:172] (0xc000a2d080) (0xc0009c6280) Stream removed, broadcasting: 1\nI0226 01:34:53.841664    4335 log.go:172] (0xc000a2d080) (0xc0009c6320) Stream removed, broadcasting: 3\nI0226 01:34:53.841711    4335 log.go:172] (0xc000a2d080) (0xc0009de5a0) Stream removed, broadcasting: 5\n"
Feb 26 01:34:53.857: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 26 01:34:53.857: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 26 01:35:04.196: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 26 01:35:14.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1026 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 26 01:35:14.797: INFO: stderr: "I0226 01:35:14.550275    4354 log.go:172] (0xc000a6f1e0) (0xc000a7e280) Create stream\nI0226 01:35:14.550659    4354 log.go:172] (0xc000a6f1e0) (0xc000a7e280) Stream added, broadcasting: 1\nI0226 01:35:14.555687    4354 log.go:172] (0xc000a6f1e0) Reply frame received for 1\nI0226 01:35:14.555786    4354 log.go:172] (0xc000a6f1e0) (0xc000a7e320) Create stream\nI0226 01:35:14.555796    4354 log.go:172] (0xc000a6f1e0) (0xc000a7e320) Stream added, broadcasting: 3\nI0226 01:35:14.557676    4354 log.go:172] (0xc000a6f1e0) Reply frame received for 3\nI0226 01:35:14.557726    4354 log.go:172] (0xc000a6f1e0) (0xc000a66320) Create stream\nI0226 01:35:14.557753    4354 log.go:172] (0xc000a6f1e0) (0xc000a66320) Stream added, broadcasting: 5\nI0226 01:35:14.559785    4354 log.go:172] (0xc000a6f1e0) Reply frame received for 5\nI0226 01:35:14.664163    4354 log.go:172] (0xc000a6f1e0) Data frame received for 5\nI0226 01:35:14.664245    4354 log.go:172] (0xc000a66320) (5) Data frame handling\nI0226 01:35:14.664290    4354 log.go:172] (0xc000a66320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0226 01:35:14.665044    4354 log.go:172] (0xc000a6f1e0) Data frame received for 3\nI0226 01:35:14.665057    4354 log.go:172] (0xc000a7e320) (3) Data frame handling\nI0226 01:35:14.665070    4354 log.go:172] (0xc000a7e320) (3) Data frame sent\nI0226 01:35:14.786045    4354 log.go:172] (0xc000a6f1e0) Data frame received for 1\nI0226 01:35:14.786373    4354 log.go:172] (0xc000a6f1e0) (0xc000a7e320) Stream removed, broadcasting: 3\nI0226 01:35:14.786440    4354 log.go:172] (0xc000a7e280) (1) Data frame handling\nI0226 01:35:14.786502    4354 log.go:172] (0xc000a7e280) (1) Data frame sent\nI0226 01:35:14.786533    4354 log.go:172] (0xc000a6f1e0) (0xc000a66320) Stream removed, broadcasting: 5\nI0226 01:35:14.786610    4354 log.go:172] (0xc000a6f1e0) (0xc000a7e280) Stream removed, broadcasting: 1\nI0226 01:35:14.786730    4354 log.go:172] (0xc000a6f1e0) Go away received\nI0226 01:35:14.787305    4354 log.go:172] (0xc000a6f1e0) (0xc000a7e280) Stream removed, broadcasting: 1\nI0226 01:35:14.787356    4354 log.go:172] (0xc000a6f1e0) (0xc000a7e320) Stream removed, broadcasting: 3\nI0226 01:35:14.787384    4354 log.go:172] (0xc000a6f1e0) (0xc000a66320) Stream removed, broadcasting: 5\n"
Feb 26 01:35:14.797: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 26 01:35:14.797: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 26 01:35:14.832: INFO: Waiting for StatefulSet statefulset-1026/ss2 to complete update
Feb 26 01:35:14.832: INFO: Waiting for Pod statefulset-1026/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 26 01:35:14.832: INFO: Waiting for Pod statefulset-1026/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 26 01:35:14.832: INFO: Waiting for Pod statefulset-1026/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 26 01:35:24.845: INFO: Waiting for StatefulSet statefulset-1026/ss2 to complete update
Feb 26 01:35:24.845: INFO: Waiting for Pod statefulset-1026/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 26 01:35:24.845: INFO: Waiting for Pod statefulset-1026/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 26 01:35:34.843: INFO: Waiting for StatefulSet statefulset-1026/ss2 to complete update
Feb 26 01:35:34.843: INFO: Waiting for Pod statefulset-1026/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 26 01:35:34.843: INFO: Waiting for Pod statefulset-1026/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 26 01:35:44.847: INFO: Waiting for StatefulSet statefulset-1026/ss2 to complete update
Feb 26 01:35:44.847: INFO: Waiting for Pod statefulset-1026/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 26 01:35:54.852: INFO: Waiting for StatefulSet statefulset-1026/ss2 to complete update
Feb 26 01:35:54.852: INFO: Waiting for Pod statefulset-1026/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 26 01:36:04.844: INFO: Waiting for StatefulSet statefulset-1026/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 26 01:36:14.845: INFO: Deleting all statefulset in ns statefulset-1026
Feb 26 01:36:14.849: INFO: Scaling statefulset ss2 to 0
Feb 26 01:36:44.891: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 01:36:44.897: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:36:44.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1026" for this suite.

• [SLOW TEST:213.218 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":280,"completed":265,"skipped":4245,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:36:44.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:36:45.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-891" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":280,"completed":266,"skipped":4248,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:36:45.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-fcf4a600-d0fa-461c-906e-9b2e85385066
STEP: Creating a pod to test consume secrets
Feb 26 01:36:45.229: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bcf04394-7289-4880-b2b0-9ce4a5685f4e" in namespace "projected-6013" to be "success or failure"
Feb 26 01:36:45.232: INFO: Pod "pod-projected-secrets-bcf04394-7289-4880-b2b0-9ce4a5685f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.240472ms
Feb 26 01:36:47.239: INFO: Pod "pod-projected-secrets-bcf04394-7289-4880-b2b0-9ce4a5685f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009892758s
Feb 26 01:36:49.254: INFO: Pod "pod-projected-secrets-bcf04394-7289-4880-b2b0-9ce4a5685f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024853192s
Feb 26 01:36:51.281: INFO: Pod "pod-projected-secrets-bcf04394-7289-4880-b2b0-9ce4a5685f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051630908s
Feb 26 01:36:53.287: INFO: Pod "pod-projected-secrets-bcf04394-7289-4880-b2b0-9ce4a5685f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057494494s
Feb 26 01:36:55.301: INFO: Pod "pod-projected-secrets-bcf04394-7289-4880-b2b0-9ce4a5685f4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072248527s
STEP: Saw pod success
Feb 26 01:36:55.302: INFO: Pod "pod-projected-secrets-bcf04394-7289-4880-b2b0-9ce4a5685f4e" satisfied condition "success or failure"
Feb 26 01:36:55.306: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-bcf04394-7289-4880-b2b0-9ce4a5685f4e container projected-secret-volume-test: 
STEP: delete the pod
Feb 26 01:36:55.375: INFO: Waiting for pod pod-projected-secrets-bcf04394-7289-4880-b2b0-9ce4a5685f4e to disappear
Feb 26 01:36:55.379: INFO: Pod pod-projected-secrets-bcf04394-7289-4880-b2b0-9ce4a5685f4e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:36:55.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6013" for this suite.

• [SLOW TEST:10.272 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":267,"skipped":4278,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:36:55.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 26 01:36:55.567: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abf83d39-0538-4755-a81a-2f9bc36638f3" in namespace "projected-6600" to be "success or failure"
Feb 26 01:36:56.481: INFO: Pod "downwardapi-volume-abf83d39-0538-4755-a81a-2f9bc36638f3": Phase="Pending", Reason="", readiness=false. Elapsed: 913.664132ms
Feb 26 01:36:58.491: INFO: Pod "downwardapi-volume-abf83d39-0538-4755-a81a-2f9bc36638f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.923879059s
Feb 26 01:37:00.610: INFO: Pod "downwardapi-volume-abf83d39-0538-4755-a81a-2f9bc36638f3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.043057138s
Feb 26 01:37:02.626: INFO: Pod "downwardapi-volume-abf83d39-0538-4755-a81a-2f9bc36638f3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.05886447s
Feb 26 01:37:04.634: INFO: Pod "downwardapi-volume-abf83d39-0538-4755-a81a-2f9bc36638f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.066728755s
STEP: Saw pod success
Feb 26 01:37:04.634: INFO: Pod "downwardapi-volume-abf83d39-0538-4755-a81a-2f9bc36638f3" satisfied condition "success or failure"
Feb 26 01:37:04.638: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-abf83d39-0538-4755-a81a-2f9bc36638f3 container client-container: 
STEP: delete the pod
Feb 26 01:37:05.067: INFO: Waiting for pod downwardapi-volume-abf83d39-0538-4755-a81a-2f9bc36638f3 to disappear
Feb 26 01:37:05.128: INFO: Pod downwardapi-volume-abf83d39-0538-4755-a81a-2f9bc36638f3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:37:05.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6600" for this suite.

• [SLOW TEST:9.755 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":268,"skipped":4317,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:37:05.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 26 01:37:05.315: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:37:22.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1785" for this suite.

• [SLOW TEST:17.303 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":280,"completed":269,"skipped":4320,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:37:22.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 26 01:37:36.672: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 01:37:36.683: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 01:37:38.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 01:37:38.697: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 01:37:40.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 01:37:40.718: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 01:37:42.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 01:37:42.705: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 01:37:44.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 01:37:44.728: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 01:37:46.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 01:37:46.693: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 01:37:48.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 01:37:48.691: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 01:37:50.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 01:37:50.701: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 01:37:52.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 01:37:52.690: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:37:52.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4280" for this suite.

• [SLOW TEST:30.285 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":280,"completed":270,"skipped":4342,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:37:52.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating replication controller my-hostname-basic-c162c446-f2b1-434d-b906-c3145e53d134
Feb 26 01:37:52.848: INFO: Pod name my-hostname-basic-c162c446-f2b1-434d-b906-c3145e53d134: Found 0 pods out of 1
Feb 26 01:37:57.864: INFO: Pod name my-hostname-basic-c162c446-f2b1-434d-b906-c3145e53d134: Found 1 pods out of 1
Feb 26 01:37:57.864: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c162c446-f2b1-434d-b906-c3145e53d134" are running
Feb 26 01:38:01.880: INFO: Pod "my-hostname-basic-c162c446-f2b1-434d-b906-c3145e53d134-ch2ml" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 01:37:52 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 01:37:52 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c162c446-f2b1-434d-b906-c3145e53d134]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 01:37:52 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c162c446-f2b1-434d-b906-c3145e53d134]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 01:37:52 +0000 UTC Reason: Message:}])
Feb 26 01:38:01.880: INFO: Trying to dial the pod
Feb 26 01:38:06.904: INFO: Controller my-hostname-basic-c162c446-f2b1-434d-b906-c3145e53d134: Got expected result from replica 1 [my-hostname-basic-c162c446-f2b1-434d-b906-c3145e53d134-ch2ml]: "my-hostname-basic-c162c446-f2b1-434d-b906-c3145e53d134-ch2ml", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:38:06.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-480" for this suite.

• [SLOW TEST:14.208 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":271,"skipped":4376,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:38:06.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 26 01:38:07.603: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 26 01:38:09.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:38:11.627: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:38:13.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:38:15.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277887, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 26 01:38:18.746: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:38:29.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2171" for this suite.
STEP: Destroying namespace "webhook-2171-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:22.268 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":280,"completed":272,"skipped":4389,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:38:29.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0226 01:38:32.810981       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 01:38:32.811: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:38:32.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1711" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":273,"skipped":4419,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:38:32.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: executing a command with run --rm and attach with stdin
Feb 26 01:38:33.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8121 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 26 01:38:45.104: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0226 01:38:43.650930    4374 log.go:172] (0xc0009968f0) (0xc000669a40) Create stream\nI0226 01:38:43.651058    4374 log.go:172] (0xc0009968f0) (0xc000669a40) Stream added, broadcasting: 1\nI0226 01:38:43.657407    4374 log.go:172] (0xc0009968f0) Reply frame received for 1\nI0226 01:38:43.657490    4374 log.go:172] (0xc0009968f0) (0xc000a940a0) Create stream\nI0226 01:38:43.657515    4374 log.go:172] (0xc0009968f0) (0xc000a940a0) Stream added, broadcasting: 3\nI0226 01:38:43.659744    4374 log.go:172] (0xc0009968f0) Reply frame received for 3\nI0226 01:38:43.659781    4374 log.go:172] (0xc0009968f0) (0xc000669ae0) Create stream\nI0226 01:38:43.659803    4374 log.go:172] (0xc0009968f0) (0xc000669ae0) Stream added, broadcasting: 5\nI0226 01:38:43.661371    4374 log.go:172] (0xc0009968f0) Reply frame received for 5\nI0226 01:38:43.661407    4374 log.go:172] (0xc0009968f0) (0xc000669b80) Create stream\nI0226 01:38:43.661424    4374 log.go:172] (0xc0009968f0) (0xc000669b80) Stream added, broadcasting: 7\nI0226 01:38:43.663901    4374 log.go:172] (0xc0009968f0) Reply frame received for 7\nI0226 01:38:43.664270    4374 log.go:172] (0xc000a940a0) (3) Writing data frame\nI0226 01:38:43.664491    4374 log.go:172] (0xc000a940a0) (3) Writing data frame\nI0226 01:38:43.670185    4374 log.go:172] (0xc0009968f0) Data frame received for 5\nI0226 01:38:43.670205    4374 log.go:172] (0xc000669ae0) (5) Data frame handling\nI0226 01:38:43.670218    4374 log.go:172] (0xc000669ae0) (5) Data frame sent\nI0226 01:38:43.673087    4374 log.go:172] (0xc0009968f0) Data frame received for 5\nI0226 01:38:43.673106    4374 log.go:172] (0xc000669ae0) (5) Data frame handling\nI0226 01:38:43.673118    4374 log.go:172] (0xc000669ae0) (5) Data frame sent\nI0226 01:38:45.062024    4374 log.go:172] (0xc0009968f0) (0xc000a940a0) Stream removed, broadcasting: 3\nI0226 01:38:45.062238    4374 log.go:172] (0xc0009968f0) Data frame received for 1\nI0226 01:38:45.062252    4374 log.go:172] (0xc000669a40) (1) Data frame handling\nI0226 01:38:45.062261    4374 log.go:172] (0xc000669a40) (1) Data frame sent\nI0226 01:38:45.062269    4374 log.go:172] (0xc0009968f0) (0xc000669a40) Stream removed, broadcasting: 1\nI0226 01:38:45.062927    4374 log.go:172] (0xc0009968f0) (0xc000669ae0) Stream removed, broadcasting: 5\nI0226 01:38:45.063076    4374 log.go:172] (0xc0009968f0) (0xc000669b80) Stream removed, broadcasting: 7\nI0226 01:38:45.063136    4374 log.go:172] (0xc0009968f0) Go away received\nI0226 01:38:45.063211    4374 log.go:172] (0xc0009968f0) (0xc000669a40) Stream removed, broadcasting: 1\nI0226 01:38:45.063229    4374 log.go:172] (0xc0009968f0) (0xc000a940a0) Stream removed, broadcasting: 3\nI0226 01:38:45.063238    4374 log.go:172] (0xc0009968f0) (0xc000669ae0) Stream removed, broadcasting: 5\nI0226 01:38:45.063245    4374 log.go:172] (0xc0009968f0) (0xc000669b80) Stream removed, broadcasting: 7\n"
Feb 26 01:38:45.104: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:38:47.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8121" for this suite.

• [SLOW TEST:14.304 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":280,"completed":274,"skipped":4425,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:38:47.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 26 01:38:47.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8763'
Feb 26 01:38:47.438: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 26 01:38:47.438: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604
Feb 26 01:38:49.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8763'
Feb 26 01:38:49.731: INFO: stderr: ""
Feb 26 01:38:49.731: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:38:49.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8763" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":280,"completed":275,"skipped":4436,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:38:49.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:38:50.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7809" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":280,"completed":276,"skipped":4450,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:38:50.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 26 01:38:50.398: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 26 01:38:55.439: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 26 01:39:01.453: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 26 01:39:03.462: INFO: Creating deployment "test-rollover-deployment"
Feb 26 01:39:03.563: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 26 01:39:05.578: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 26 01:39:05.588: INFO: Ensure that both replica sets have 1 created replica
Feb 26 01:39:05.598: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 26 01:39:05.607: INFO: Updating deployment test-rollover-deployment
Feb 26 01:39:05.608: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 26 01:39:07.638: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 26 01:39:07.648: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 26 01:39:07.658: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 01:39:07.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277945, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:39:09.679: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 01:39:09.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277945, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:39:11.674: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 01:39:11.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277945, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:39:13.681: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 01:39:13.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277953, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:39:15.674: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 01:39:15.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277953, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:39:17.676: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 01:39:17.676: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277953, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:39:19.673: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 01:39:19.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277953, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:39:21.675: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 01:39:21.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277953, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718277943, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 01:39:23.682: INFO: 
Feb 26 01:39:23.682: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 26 01:39:23.700: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-5607 /apis/apps/v1/namespaces/deployment-5607/deployments/test-rollover-deployment 5c0c3980-a104-4785-abb2-7293d1c30666 10783882 2 2020-02-26 01:39:03 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0055ee2c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-26 01:39:03 +0000 UTC,LastTransitionTime:2020-02-26 01:39:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-26 01:39:23 +0000 UTC,LastTransitionTime:2020-02-26 01:39:03 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 26 01:39:23.706: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-5607 /apis/apps/v1/namespaces/deployment-5607/replicasets/test-rollover-deployment-574d6dfbff a5c0eb9f-1e73-4f0a-9b1d-99ccd36b11d5 10783872 2 2020-02-26 01:39:05 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 5c0c3980-a104-4785-abb2-7293d1c30666 0xc0055ee757 0xc0055ee758}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0055ee7c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 26 01:39:23.706: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 26 01:39:23.706: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-5607 /apis/apps/v1/namespaces/deployment-5607/replicasets/test-rollover-controller be9fc8e6-755f-427a-bc5b-001929c8c03f 10783881 2 2020-02-26 01:38:50 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 5c0c3980-a104-4785-abb2-7293d1c30666 0xc0055ee66f 0xc0055ee680}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0055ee6e8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 26 01:39:23.707: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-5607 /apis/apps/v1/namespaces/deployment-5607/replicasets/test-rollover-deployment-f6c94f66c d1a87eab-a704-4ef4-bab5-dc0a5c99fbf4 10783814 2 2020-02-26 01:39:03 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 5c0c3980-a104-4785-abb2-7293d1c30666 0xc0055ee830 0xc0055ee831}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0055ee8a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 26 01:39:23.710: INFO: Pod "test-rollover-deployment-574d6dfbff-m82tj" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-m82tj test-rollover-deployment-574d6dfbff- deployment-5607 /api/v1/namespaces/deployment-5607/pods/test-rollover-deployment-574d6dfbff-m82tj 750b59f7-4163-46a0-bfce-e65f89e49260 10783846 0 2020-02-26 01:39:05 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff a5c0eb9f-1e73-4f0a-9b1d-99ccd36b11d5 0xc0055eedd7 0xc0055eedd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xn7vt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xn7vt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xn7vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:39:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:39:13 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:39:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-26 01:39:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-26 01:39:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-26 01:39:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://e09935afbc356b7a76bfe1165379126a923e016979eb3b41a5ff5a1a7a5271d0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:39:23.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5607" for this suite.

• [SLOW TEST:33.486 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":280,"completed":277,"skipped":4480,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:39:23.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466
STEP: creating a pod
Feb 26 01:39:24.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-9663 -- logs-generator --log-lines-total 100 --run-duration 20s'
Feb 26 01:39:24.990: INFO: stderr: ""
Feb 26 01:39:24.990: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Waiting for log generator to start.
Feb 26 01:39:24.990: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Feb 26 01:39:24.990: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9663" to be "running and ready, or succeeded"
Feb 26 01:39:25.003: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.487463ms
Feb 26 01:39:27.022: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031653079s
Feb 26 01:39:29.127: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137217619s
Feb 26 01:39:31.134: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143332512s
Feb 26 01:39:33.217: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227009108s
Feb 26 01:39:35.226: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.235375844s
Feb 26 01:39:37.235: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 12.244487431s
Feb 26 01:39:37.235: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Feb 26 01:39:37.235: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Feb 26 01:39:37.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9663'
Feb 26 01:39:37.540: INFO: stderr: ""
Feb 26 01:39:37.540: INFO: stdout: "I0226 01:39:34.742908       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/f5d 448\nI0226 01:39:34.943288       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/bpjh 584\nI0226 01:39:35.143142       1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/5bx 336\nI0226 01:39:35.343581       1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/wmfs 504\nI0226 01:39:35.543201       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/tzgn 329\nI0226 01:39:35.743166       1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/ljm6 335\nI0226 01:39:35.943257       1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/kzs 400\nI0226 01:39:36.143205       1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/qtr 460\nI0226 01:39:36.343294       1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/p8nz 238\nI0226 01:39:36.543472       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/f4p 291\nI0226 01:39:36.743232       1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/6t5 421\nI0226 01:39:36.943324       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/v96 441\nI0226 01:39:37.143706       1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/zn2z 516\nI0226 01:39:37.343449       1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/hj2r 201\n"
STEP: limiting log lines
Feb 26 01:39:37.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9663 --tail=1'
Feb 26 01:39:37.687: INFO: stderr: ""
Feb 26 01:39:37.687: INFO: stdout: "I0226 01:39:37.543471       1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/fc4k 320\n"
Feb 26 01:39:37.687: INFO: got output "I0226 01:39:37.543471       1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/fc4k 320\n"
STEP: limiting log bytes
Feb 26 01:39:37.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9663 --limit-bytes=1'
Feb 26 01:39:37.905: INFO: stderr: ""
Feb 26 01:39:37.905: INFO: stdout: "I"
Feb 26 01:39:37.905: INFO: got output "I"
STEP: exposing timestamps
Feb 26 01:39:37.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9663 --tail=1 --timestamps'
Feb 26 01:39:38.022: INFO: stderr: ""
Feb 26 01:39:38.022: INFO: stdout: "2020-02-26T01:39:37.943634008Z I0226 01:39:37.943149       1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/vj6 567\n"
Feb 26 01:39:38.022: INFO: got output "2020-02-26T01:39:37.943634008Z I0226 01:39:37.943149       1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/vj6 567\n"
STEP: restricting to a time range
Feb 26 01:39:40.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9663 --since=1s'
Feb 26 01:39:40.712: INFO: stderr: ""
Feb 26 01:39:40.712: INFO: stdout: "I0226 01:39:39.743137       1 logs_generator.go:76] 25 POST /api/v1/namespaces/ns/pods/42l 362\nI0226 01:39:39.943247       1 logs_generator.go:76] 26 POST /api/v1/namespaces/default/pods/kls 589\nI0226 01:39:40.143260       1 logs_generator.go:76] 27 PUT /api/v1/namespaces/default/pods/kq9x 473\nI0226 01:39:40.343262       1 logs_generator.go:76] 28 GET /api/v1/namespaces/default/pods/6wk2 525\nI0226 01:39:40.543469       1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/8knn 417\n"
Feb 26 01:39:40.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9663 --since=24h'
Feb 26 01:39:40.869: INFO: stderr: ""
Feb 26 01:39:40.870: INFO: stdout: "I0226 01:39:34.742908       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/f5d 448\nI0226 01:39:34.943288       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/bpjh 584\nI0226 01:39:35.143142       1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/5bx 336\nI0226 01:39:35.343581       1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/wmfs 504\nI0226 01:39:35.543201       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/tzgn 329\nI0226 01:39:35.743166       1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/ljm6 335\nI0226 01:39:35.943257       1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/kzs 400\nI0226 01:39:36.143205       1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/qtr 460\nI0226 01:39:36.343294       1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/p8nz 238\nI0226 01:39:36.543472       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/f4p 291\nI0226 01:39:36.743232       1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/6t5 421\nI0226 01:39:36.943324       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/v96 441\nI0226 01:39:37.143706       1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/zn2z 516\nI0226 01:39:37.343449       1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/hj2r 201\nI0226 01:39:37.543471       1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/fc4k 320\nI0226 01:39:37.743314       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/jpln 477\nI0226 01:39:37.943149       1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/vj6 567\nI0226 01:39:38.143202       1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/tjn7 338\nI0226 01:39:38.343267       1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/bzwr 478\nI0226 01:39:38.543397       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/5dw 490\nI0226 01:39:38.743317       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/pwnz 448\nI0226 01:39:38.943312       1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/w46 405\nI0226 01:39:39.143293       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/8dd 399\nI0226 01:39:39.343373       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/95q 390\nI0226 01:39:39.543268       1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/9sgg 203\nI0226 01:39:39.743137       1 logs_generator.go:76] 25 POST /api/v1/namespaces/ns/pods/42l 362\nI0226 01:39:39.943247       1 logs_generator.go:76] 26 POST /api/v1/namespaces/default/pods/kls 589\nI0226 01:39:40.143260       1 logs_generator.go:76] 27 PUT /api/v1/namespaces/default/pods/kq9x 473\nI0226 01:39:40.343262       1 logs_generator.go:76] 28 GET /api/v1/namespaces/default/pods/6wk2 525\nI0226 01:39:40.543469       1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/8knn 417\nI0226 01:39:40.743337       1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/zmj4 452\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472
Feb 26 01:39:40.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9663'
Feb 26 01:39:52.351: INFO: stderr: ""
Feb 26 01:39:52.351: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:39:52.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9663" for this suite.

• [SLOW TEST:28.631 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":280,"completed":278,"skipped":4487,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
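
The spec above drives the logs-generator pod and then reads its output back through kubectl's log-filtering flags; the transcript only shows the --since=24h call. For context, a few invocations of the same standard filters against that pod (pod and namespace names taken from the run above; the exact flag combinations the spec asserts on may differ):

# Return only the most recent entries
kubectl logs logs-generator --namespace=kubectl-9663 --tail=5
# Cap the response size in bytes
kubectl logs logs-generator --namespace=kubectl-9663 --limit-bytes=500
# Entries newer than a duration, prefixed with their timestamps
kubectl logs logs-generator --namespace=kubectl-9663 --since=10s --timestamps
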
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 26 01:39:52.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-36a4ec6d-a891-497e-8ac6-954c507e5190
STEP: Creating a pod to test consume configMaps
Feb 26 01:39:52.482: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-47d6c583-20c9-4865-93f0-fcf8d873fb05" in namespace "projected-1684" to be "success or failure"
Feb 26 01:39:52.497: INFO: Pod "pod-projected-configmaps-47d6c583-20c9-4865-93f0-fcf8d873fb05": Phase="Pending", Reason="", readiness=false. Elapsed: 15.263422ms
Feb 26 01:39:54.524: INFO: Pod "pod-projected-configmaps-47d6c583-20c9-4865-93f0-fcf8d873fb05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04212942s
Feb 26 01:39:56.543: INFO: Pod "pod-projected-configmaps-47d6c583-20c9-4865-93f0-fcf8d873fb05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061022291s
Feb 26 01:39:58.558: INFO: Pod "pod-projected-configmaps-47d6c583-20c9-4865-93f0-fcf8d873fb05": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076426638s
Feb 26 01:40:00.570: INFO: Pod "pod-projected-configmaps-47d6c583-20c9-4865-93f0-fcf8d873fb05": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088446818s
Feb 26 01:40:02.580: INFO: Pod "pod-projected-configmaps-47d6c583-20c9-4865-93f0-fcf8d873fb05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098293937s
STEP: Saw pod success
Feb 26 01:40:02.580: INFO: Pod "pod-projected-configmaps-47d6c583-20c9-4865-93f0-fcf8d873fb05" satisfied condition "success or failure"
Feb 26 01:40:02.596: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-47d6c583-20c9-4865-93f0-fcf8d873fb05 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 26 01:40:02.863: INFO: Waiting for pod pod-projected-configmaps-47d6c583-20c9-4865-93f0-fcf8d873fb05 to disappear
Feb 26 01:40:02.887: INFO: Pod pod-projected-configmaps-47d6c583-20c9-4865-93f0-fcf8d873fb05 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 26 01:40:02.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1684" for this suite.

• [SLOW TEST:10.550 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":279,"skipped":4557,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
Feb 26 01:40:02.918: INFO: Running AfterSuite actions on all nodes
Feb 26 01:40:02.918: INFO: Running AfterSuite actions on node 1
Feb 26 01:40:02.918: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":280,"completed":279,"skipped":4565,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339

Ran 280 of 4845 Specs in 7250.380 seconds
FAIL! -- 279 Passed | 1 Failed | 0 Pending | 4565 Skipped
--- FAIL: TestE2E (7250.52s)
FAIL
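
Net result: 279 of the 280 selected conformance specs passed in roughly two hours, with the Guestbook application spec as the sole failure. To iterate on that one spec rather than repeating the full run, the e2e.test binary accepts Ginkgo's focus regex; a sketch (flag spellings as registered by this framework version; adjust --kubeconfig and --provider to the environment):

./e2e.test --ginkgo.focus='Guestbook application should create and stop a working application' --kubeconfig=/root/.kube/config --provider=skeleton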